
    Systematic evaluation of software product line architectures

    The architecture of a software product line is one of its most important artifacts, as it represents an abstraction of the products that can be generated. It is crucial to evaluate the quality attributes of a product line architecture in order to: increase the productivity of the product line process and the quality of the products; provide a means to understand the potential behavior of the products and, consequently, decrease their time to market; and improve the handling of the product line variability. The evaluation of a product line architecture can also serve as a basis for software managers and architects to analyze its managerial and economic value. Most current research on the evaluation of product line architectures does not take into account metrics obtained directly from UML models and their variabilities; the metrics used instead are difficult to apply in general and to use for quantitative analysis. This paper presents a Systematic Evaluation Method for UML-based Software Product Line Architecture, SystEM-PLA. SystEM-PLA differs from current research in that it provides stakeholders with a means to: (i) estimate and analyze potential products; (ii) use predefined basic UML-based metrics to compose quality attribute metrics; (iii) perform feasibility and trade-off analysis of a product line architecture with respect to its quality attributes; and (iv) make the evaluation of a product line architecture more flexible. An example using the SEI's Arcade Game Maker (AGM) product line is presented as a proof of concept, illustrating SystEM-PLA activities. Metrics for the complexity and extensibility quality attributes are defined and used to perform a trade-off analysis.
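    A minimal sketch, in Python, of the kind of composition the abstract describes: basic UML-based metrics are combined into quality attribute metrics (here complexity and extensibility) and compared across potential products for a trade-off view. All metric names, weights, and values are hypothetical illustrations, not the method's actual definitions.

    # Basic UML-based metrics per potential product (hypothetical values).
    products = {
        "base_game":     {"classes": 18, "avg_methods_per_class": 3.1,
                          "variation_points": 4, "extension_points": 6},
        "full_featured": {"classes": 42, "avg_methods_per_class": 5.7,
                          "variation_points": 9, "extension_points": 14},
    }

    # Quality attribute metrics composed from the basic metrics (assumed weights).
    def complexity(m):
        return 0.6 * m["avg_methods_per_class"] + 0.4 * (m["classes"] / 10)

    def extensibility(m):
        return 0.5 * m["variation_points"] + 0.5 * m["extension_points"]

    # Trade-off view: higher extensibility tends to come with higher complexity.
    for name, m in products.items():
        print(f"{name}: complexity={complexity(m):.2f}, extensibility={extensibility(m):.2f}")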

    QEF - Quantitative Evaluation Framework: evaluating educational software

    In this paper we propose the Quantitative Evaluation Framework (QEF) to evaluate educational software systems built with the X-TEC (Techno-Didactical Extension for Instruction/Learning Based on Computer) model, in order to validate and strengthen the potential quality of e-Learning systems. QEF evaluates educational software quality in a three-dimensional space. Each dimension aggregates a set of factors, where a factor is a component that represents the system's performance from a particular point of view. The quality of a given system is defined in this three-dimensional Cartesian quality space and measured, as a percentage, relative to a hypothetical ideal system represented in the same space.
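    A minimal sketch of the quality-space idea described above, assuming each dimension's score is the mean of its factor scores in [0, 1] and that quality is the percentage of the maximum distance covered toward the ideal system at (1, 1, 1). Dimension and factor names are illustrative, not QEF's actual ones.

    import math

    # Three dimensions, each aggregating a set of factor scores (hypothetical data).
    dimensions = {
        "educational": {"content_accuracy": 0.9, "learning_support": 0.7},
        "usability":   {"navigation": 0.8, "feedback": 0.6},
        "technical":   {"reliability": 0.95, "portability": 0.5},
    }

    # Each dimension's coordinate is the mean of its factor scores.
    coords = [sum(f.values()) / len(f) for f in dimensions.values()]

    ideal = [1.0, 1.0, 1.0]                       # the hypothetical ideal system
    worst = [0.0, 0.0, 0.0]
    distance = math.dist(coords, ideal)           # Euclidean distance to the ideal
    quality_pct = (1 - distance / math.dist(worst, ideal)) * 100

    print(f"dimension scores: {[round(c, 2) for c in coords]}")
    print(f"quality relative to the ideal system: {quality_pct:.1f}%")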

    A Lifecycle Which Incorporates Software Metrics

    The traditional waterfall life cycle model of software development provides a systematic method for separating the development process into stages with explicit communication boundaries between each subsequent stage. However, the waterfall model does not provide quantitative measurements for the products of each phase in the software life cycle. The model provides a base for developing methodologies that emphasize the completeness of documents, the use of certain disciplines, and consistency among documents. On the other hand, it is very hard to use the model to develop methodologies that provide 1) quantitative evaluation of the quality of the documents (products) from each phase, 2) feedback information to help the manager make management decisions, and 3) criteria for redesigning or recoding a system. To ensure the quality of software products, a common basis for more meaningful evaluation, leading to a better understanding of software quality, must be provided.

    An approach for assessing the quality of software architecture

    Proper implementation of the selected software product architecture is essential to the quality of its functioning. This paper offers a quantitative approach to assessing the quality of a software architecture by the degree of functionality of the software being created. For this purpose, a structural model is used which links the components of the software architecture in a graph of architectural dependencies. The metrics calculated to evaluate the quality of the software architecture have been defined via expert judgment and can be expressed either as crisp numbers or as fuzzy variables.
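    A minimal sketch of the structural model described above: components linked in a graph of architectural dependencies, with a coupling metric expressed either as a crisp number or as a triangular fuzzy value. Component names, edge weights, and the aggregation rule are illustrative assumptions.

    # Architectural dependency graph as an adjacency list (hypothetical components).
    dependencies = {
        "ui": ["services"],
        "services": ["domain", "persistence"],
        "domain": [],
        "persistence": ["domain"],
    }

    # Crisp metric: fan-out (number of outgoing dependencies) per component.
    fan_out = {c: len(deps) for c, deps in dependencies.items()}
    print("fan-out:", fan_out)

    # Fuzzy variant: expert-assigned triangular weights (low, mode, high) per edge,
    # aggregated per component by component-wise addition.
    edge_weight = {
        ("ui", "services"): (0.2, 0.4, 0.6),
        ("services", "domain"): (0.5, 0.7, 0.9),
        ("services", "persistence"): (0.3, 0.5, 0.7),
        ("persistence", "domain"): (0.1, 0.2, 0.3),
    }

    def fuzzy_sum(triples):
        if not triples:
            return (0.0, 0.0, 0.0)
        return tuple(round(sum(t[i] for t in triples), 2) for i in range(3))

    coupling = {c: fuzzy_sum([w for (src, _), w in edge_weight.items() if src == c])
                for c in dependencies}
    print("fuzzy coupling:", coupling)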

    Quantitative evaluation framework of the X-TEC model

    This paper outlines the way in which the X-TEC (Techno-Didactical Extension for Instruction/Learning Based on Computer) model is used in the development of educational software in order to strengthen the potential quality of e-Learning systems. The aim of the present paper is to describe the deployment phases of the X-TEC model and its global evaluation. For the evaluation of our educational software systems based on the X-TEC model, we propose a generic quantitative evaluation framework.

    A Quantitative Evaluation for Usability under Software Quality Models

    Usability evaluation in Human-Computer Interaction (HCI) is done by observing users' behaviors and reactions while they perform given tasks; observers examine users' behaviors during the assigned tasks and describe their observations in terms of usability. Such evaluation depends on the observers' ability and experience, and it proceeds qualitatively. We propose a quantitative evaluation that adopts attributes and metrics from Systems and software Quality Requirements and Evaluation (SQuaRE), published by the International Organization for Standardization (ISO). Furthermore, we examine qualitative observations conducted in usability testing and apply our method to turn them into a quantitative evaluation.
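    A minimal sketch of how qualitative usability observations could be turned into numbers in the spirit of SQuaRE (ISO/IEC 25010/25022); the records and measures below are simplified assumptions, not the paper's actual mapping.

    # One record per observed user/task from a usability session (hypothetical data).
    observations = [
        {"completed": True,  "time_s": 45, "errors": 1, "satisfaction_1to5": 4},
        {"completed": True,  "time_s": 60, "errors": 0, "satisfaction_1to5": 5},
        {"completed": False, "time_s": 90, "errors": 3, "satisfaction_1to5": 2},
    ]

    n = len(observations)
    effectiveness = sum(o["completed"] for o in observations) / n          # task completion rate
    mean_time = sum(o["time_s"] for o in observations) / n                 # efficiency proxy (seconds per task)
    satisfaction = sum(o["satisfaction_1to5"] for o in observations) / (5 * n)

    print(f"effectiveness={effectiveness:.2f}, mean task time={mean_time:.0f}s, "
          f"satisfaction={satisfaction:.2f}")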

    Computer System Testing

    Increasingly, the quantitative evaluation of computer software is recognized as critically important to the effective functioning of computer systems. At NPS, a model for statistically analyzing software error detection and correction processes during software functional testing has been developed. The model provides decision aids for controlling the quality of command and control system software. The inputs to the model are error detection histories, and the outputs are forecasts of the future behavior of the error detection and correction processes.
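    A minimal sketch of the forecasting idea described above, using a simple exponential (Goel-Okumoto-style) growth curve fitted to a cumulative error-detection history by coarse grid search; the data and the curve are stand-ins, not the NPS model itself.

    import math

    weeks = [1, 2, 3, 4, 5, 6]
    cumulative_errors = [12, 21, 27, 31, 33, 35]      # hypothetical detection history

    # mu(t) = a * (1 - exp(-b * t)): expected cumulative errors detected by time t.
    def mu(t, a, b):
        return a * (1 - math.exp(-b * t))

    # Least-squares fit over a coarse parameter grid (adequate for a sketch).
    a, b = min(((a, b) for a in range(30, 61) for b in (i / 100 for i in range(5, 100))),
               key=lambda p: sum((mu(t, *p) - y) ** 2 for t, y in zip(weeks, cumulative_errors)))

    print(f"fitted a={a}, b={b:.2f}")
    print(f"forecast cumulative errors at week 8: {mu(8, a, b):.1f}")
    print(f"estimated errors remaining in the system: {a - cumulative_errors[-1]}")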

    What's the PREMES behind your pattern?

    Design patterns are supposed to be well-documented, tried-and-tested solutions to recurring problems. Current evaluation techniques do not provide a demonstrable and holistic means of evaluating pattern quality. This paper introduces Pattern Report Cards, an evaluation process for software design patterns that is demonstrable, measurable, and reproducible. A set of quality indicators for determining pattern quality has been identified, and a set of qualitative and quantitative evaluation techniques has been assembled to determine the quality of adherence to these indicators. Further, management and execution of the evaluation process are controlled by the PREMES framework. This framework describes a management cycle that facilitates the construction of bespoke evaluation systems for design patterns. Process tailoring is achieved by providing guidance on the selection and construction of the techniques used to assess pattern quality. Use of these techniques will help bolster existing evaluation processes and lead to the improvement of design pattern evaluation techniques.
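    A minimal sketch of a report-card style scoring pass over a set of pattern quality indicators; the indicators, weights, and grading scale are illustrative assumptions rather than the PREMES framework's actual criteria.

    # Quality indicator scores (0-10) for one design pattern (hypothetical data).
    indicators = {
        "documentation_completeness": 8,
        "evidence_of_use": 9,
        "consequence_coverage": 6,
        "example_quality": 7,
    }
    weights = {k: 1.0 for k in indicators}        # equal weights for the sketch

    score = sum(indicators[k] * weights[k] for k in indicators) / sum(weights.values())
    grade = "A" if score >= 8 else "B" if score >= 6 else "C"

    print(f"pattern report card: score={score:.1f}/10, grade={grade}")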