    Model-driven performance evaluation for service engineering

    Service engineering and service-oriented architecture, as integration and platform technologies, are a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is the process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of service systems developed in a model-driven way.
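
    As a rough illustration of the empirical part of this idea, the sketch below records per-invocation timestamps for service operations in a table with valid-time columns (a basic temporal-database notion) and derives an average response-time metric. It is a minimal, assumption-laden Python/SQLite sketch, not the paper's actual tooling; the table layout, service names, and the response-time formula are all hypothetical.

        import sqlite3
        from datetime import datetime, timedelta

        # Hypothetical sketch: each service invocation is stored with its
        # valid-time interval; metrics are then computed over those intervals.
        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE invocation (
                service  TEXT NOT NULL,  -- service operation (illustrative names)
                start_ts TEXT NOT NULL,  -- valid-time start, ISO 8601
                end_ts   TEXT NOT NULL   -- valid-time end, ISO 8601
            )
        """)

        def record(service, start, end):
            conn.execute("INSERT INTO invocation VALUES (?, ?, ?)",
                         (service,
                          start.isoformat(timespec="milliseconds"),
                          end.isoformat(timespec="milliseconds")))

        t0 = datetime(2024, 1, 1, 12, 0, 0)
        record("orderService.place", t0, t0 + timedelta(milliseconds=250))
        record("orderService.place", t0, t0 + timedelta(milliseconds=410))

        # Average response time in seconds per service operation.
        for service, avg_s in conn.execute("""
                SELECT service,
                       AVG((julianday(end_ts) - julianday(start_ts)) * 86400.0)
                FROM invocation GROUP BY service"""):
            print(service, round(avg_s, 3))  # orderService.place 0.33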

    Designing software to maximize learning

    This paper starts from the assumption that any evaluation of educational software should focus on whether, and to what extent, it maximizes learning. It is particularly concerned with the impact of software on the quality of learning. The paper reviews key texts in the literature on learning, including some which relate directly to software development, and suggests ways in which a range of learning theories can inform the process of software design. The paper sets out to make a contribution to both the design and the evaluation of educational software.

    A software technology evaluation program

    A set of quantitative approaches is presented for evaluating software development methods and tools. The basic idea is to generate a set of goals that are refined into quantifiable questions, which in turn specify metrics to be collected on the software development and maintenance process and product. These metrics can be used to characterize, evaluate, predict, and motivate. They can be used in an active as well as a passive way: by learning from analysis of the data and improving the methods and tools based on what is learned. Several examples are given representing each of the different approaches to evaluation. The cost of the approaches varies inversely with the level of confidence in the interpretation of the results.
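
    The goal-question-metric refinement described here can be made concrete with a small data structure. The sketch below is a loose Python illustration in the spirit of the Goal/Question/Metric paradigm; the metric definition and the numbers are invented for the example.

        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class Metric:
            name: str
            collect: Callable[[dict], float]  # derives a value from raw project data

        @dataclass
        class Question:
            text: str
            metrics: list[Metric] = field(default_factory=list)

        @dataclass
        class Goal:
            purpose: str
            questions: list[Question] = field(default_factory=list)

            def evaluate(self, project_data: dict) -> dict:
                # Collect every metric that refines this goal via its questions.
                return {m.name: m.collect(project_data)
                        for q in self.questions for m in q.metrics}

        # Illustrative use: characterize product quality via defect density.
        defect_density = Metric("defects_per_kloc",
                                lambda d: d["defects"] / (d["loc"] / 1000))
        goal = Goal("Characterize maintenance cost",
                    [Question("How defect-prone is the product?",
                              [defect_density])])
        print(goal.evaluate({"defects": 42, "loc": 12000}))  # {'defects_per_kloc': 3.5}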

    HALOE test and evaluation software

    Computer programming, system development, and analysis efforts during this contract were carried out in support of the Halogen Occultation Experiment (HALOE) at NASA/Langley. Support in the major areas of data acquisition and monitoring, data reduction, and system development is described, along with a brief explanation of the HALOE project. Documented listings of major software are located in the appendix.

    Early evaluation of security functionality in software projects - some experience on using the Common Criteria in a quality management process

    This paper documents the experiences of assurance evaluation during the early stages of a large software development project. The project researches, contracts, and integrates privacy-respecting software into business environments. While assurance evaluation with the ISO 15408 Common Criteria (CC) under certification schemes is done after a system has been completed, our approach executes evaluation during the early phases of the software life cycle. The promise is to increase quality and to reduce testing and fault-removal costs in later phases of the development process. First results from the still-ongoing project suggest that the Common Criteria can define a framework for assurance evaluation in ongoing development projects. This paper thus documents the attempt to check a software system's security properties by means of the Common Criteria (ISO 15408) while the system is still being built, in contrast to the usual post-development evaluation.
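
    One way to picture such early, in-process evaluation: track which assurance evidence should already exist at each life-cycle phase and flag gaps before testing ends. The Python fragment below is a minimal sketch under assumed mappings; the phase names and the assignment of CC assurance classes (ADV, ATE, AVA, AGD) to phases are simplifications, not the project's actual quality management process.

        # Assumed, simplified mapping of CC assurance classes to phases.
        CC_CLASS_TO_PHASE = {
            "ADV": "design",         # development representations
            "ATE": "testing",        # functional testing evidence
            "AVA": "testing",        # vulnerability assessment
            "AGD": "documentation",  # guidance documents
        }
        PHASES = ["requirements", "design", "implementation",
                  "testing", "documentation"]

        def open_items(evidence: dict[str, bool], phase: str) -> list[str]:
            """CC classes due by `phase` that still lack evaluated evidence."""
            due = {c for c, p in CC_CLASS_TO_PHASE.items()
                   if PHASES.index(p) <= PHASES.index(phase)}
            return sorted(c for c in due if not evidence.get(c, False))

        # During testing, design evidence (ADV) exists, but test and
        # vulnerability-analysis evidence is still open.
        print(open_items({"ADV": True}, "testing"))  # ['ATE', 'AVA']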

    Systematic evaluation of software product line architectures

    The architecture of a software product line is one of its most important artifacts, as it represents an abstraction of the products that can be generated. It is crucial to evaluate the quality attributes of a product line architecture in order to: increase the productivity of the product line process and the quality of the products; provide a means to understand the potential behavior of the products and, consequently, decrease their time to market; and improve the handling of the product line variability. The evaluation of a product line architecture can serve as a basis for software managers and architects to analyze its managerial and economic value. Most current research on the evaluation of product line architectures does not take into account metrics obtained directly from UML models and their variabilities; the metrics used instead are difficult to apply in general and to use for quantitative analysis. This paper presents a Systematic Evaluation Method for UML-based Software Product Line Architecture, SystEM-PLA. SystEM-PLA differs from current research in that it provides stakeholders with a means to: (i) estimate and analyze potential products; (ii) use predefined basic UML-based metrics to compose quality attribute metrics; (iii) perform feasibility and trade-off analysis of a product line architecture with respect to its quality attributes; and (iv) make the evaluation of a product line architecture more flexible. An example using the SEI’s Arcade Game Maker (AGM) product line is presented as a proof of concept, illustrating SystEM-PLA activities. Metrics for the complexity and extensibility quality attributes are defined and used to perform a trade-off analysis.
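
    To make the flavour of steps (ii) and (iii) concrete, the sketch below composes basic model counts into complexity and extensibility scores and compares two hypothetical AGM configurations with a weighted trade-off. This is an invented Python illustration: the formulas, weights, and numbers are assumptions, not SystEM-PLA's actual UML-based metric definitions.

        # Assumed basic counts taken from a UML model of each configuration.
        def complexity(m: dict) -> float:
            # More classes, associations, and variation points -> more complex.
            return m["classes"] + 2 * m["associations"] + 3 * m["variation_points"]

        def extensibility(m: dict) -> float:
            # Abstract classes and variation points ease extension.
            return m["abstract_classes"] + 2 * m["variation_points"]

        def trade_off(m: dict, w_ext=0.5, w_cpx=0.5) -> float:
            # Higher is better: reward extensibility, penalize complexity.
            return w_ext * extensibility(m) - w_cpx * complexity(m)

        configs = {
            "AGM_minimal": {"classes": 12, "associations": 15,
                            "abstract_classes": 3, "variation_points": 2},
            "AGM_full":    {"classes": 20, "associations": 28,
                            "abstract_classes": 5, "variation_points": 6},
        }
        for name, m in sorted(configs.items(), key=lambda kv: -trade_off(kv[1])):
            print(name, round(trade_off(m), 1))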