
    Software quality model for the evaluation of semantic technologies

    In order to obtain high-quality software products, the specification and evaluation of quality during the software development process is of crucial importance. One important component in software evaluation is the software quality model, since it provides the basis for software evaluation and gives better insight into the software characteristics that influence its quality. Furthermore, quality models ensure a consistent terminology for software product quality and provide guidance for its measurement. In recent years, semantic technologies have started to gain importance and, as the field becomes more popular, the number of these technologies is growing rapidly. Just as with any other software product, the quality of semantic technologies is an important concern, and multiple evaluations of semantic technologies have been performed. However, there is no consistent terminology for describing the quality of semantic technologies, and it is difficult to compare them because of differences in the meaning of the evaluation characteristics used. Moreover, existing software quality models do not define the quality characteristics that are specific to semantic technologies. This thesis presents a quality model for semantic technologies that aims to provide common ground in the field of semantic technology evaluation. It also presents a new method for extending software quality models, based on a bottom-up approach, which is used to define the quality model for semantic technologies. Finally, this thesis describes the use of the semantic technology quality model in a web application that visualizes semantic technology evaluation results and provides semantic technology recommendations.

    Unwasted DASE: Lean Architecture Evaluation

    A software architecture evaluation is a way to assess the quality of the technical design of a product. It is also a prime opportunity to discuss the business goals of the product and how the design bears on them. But architecture evaluation methods are seen as hard to learn and costly to use. We present DASE, a compact approach that combines carefully selected key parts of two existing architecture evaluation methods while making evaluation lean and fast. We have applied DASE in three industrial cases, and the early results show that even a one-day evaluation workshop yields valuable results at a modest cost. Peer reviewed.

    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing and information systems is ubiquitous computing. The technology-oriented issues of ubiquitous computing systems have led researchers to pay much attention to feasibility studies of the technologies rather than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model ISO/IEC 9126 is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software, so much attention is being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied in an industrial ubiquitous computing setting. The results reveal that, while the approach is sound, some parts need further development in the future.
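    The three-level scheme mentioned in the abstract can be pictured as a simple nested structure. This is an illustrative sketch only: the characteristic and metric names below are invented, not taken from ISO/IEC 9126 or the paper.

```python
# Hypothetical three-level quality model: characteristics ->
# sub-characteristics -> metrics, in the spirit of the ISO/IEC 9126
# layering described above. All names here are illustrative.
quality_model = {
    "reliability": {
        "fault_tolerance": ["mean_time_between_failures", "failure_rate"],
        "recoverability": ["mean_time_to_recover"],
    },
    "usability": {
        "learnability": ["time_to_learn"],
    },
}

def metrics_for(model, characteristic):
    """Collect every metric defined under one quality characteristic."""
    subs = model.get(characteristic, {})
    return [m for metrics in subs.values() for m in metrics]

print(metrics_for(quality_model, "reliability"))
# ['mean_time_between_failures', 'failure_rate', 'mean_time_to_recover']
```

    Expanding each characteristic with evaluation questions, as the paper does, would amount to attaching a question list at the sub-characteristic level of such a structure.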

    Using smartwatch sensors to support the acquisition of sleep quality data for supervised machine learning

    It is common practice in supervised learning to use human judgment to label training data. For this process, data reliability is fundamental. Research on sleep quality has found that human sleep stage misperception may occur. In this paper we propose that human judgment be supported by software-driven evaluation based on physiological parameters, selecting as training data only data sets for which human judgment and software evaluation are aligned. A prototype system that provides a broad-spectrum perception of sleep quality data comparable with human judgment is presented. The system requires users to wear a smartwatch recording heartbeat rate and wrist acceleration. It estimates the overall percentage of each sleep stage, achieves an effective approximation of conventional sleep measures, and provides a three-class sleep quality evaluation. The training data are composed of the heartbeat rate, the wrist acceleration and the three-class sleep quality. As a proof of concept, we experimented with the approach on three subjects, each over 20 nights.
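    The selection step described above can be sketched in a few lines: keep only the nights where the human label and the software-derived label agree. The feature names and label values below are hypothetical, not taken from the paper.

```python
# Minimal sketch of agreement-based training-data selection.
# Each sample: (features, human_label, software_label); labels are
# one of the three sleep-quality classes (names are illustrative).
def select_training_data(samples):
    """Return (features, label) pairs only where both labels agree."""
    return [(f, h) for f, h, s in samples if h == s]

nights = [
    ({"hr_mean": 58, "accel_var": 0.12}, "good", "good"),
    ({"hr_mean": 71, "accel_var": 0.40}, "poor", "fair"),  # disagreement: dropped
    ({"hr_mean": 63, "accel_var": 0.22}, "fair", "fair"),
]
training = select_training_data(nights)
print(len(training))  # 2
```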

    Static Analyser for Java-Based Object-Oriented Software Metrics

    Software metrics play a major role in software development. Not only do software metrics help in understanding the size and complexity of software systems, but they are also helpful in improving the quality of software systems. For object-oriented systems, a large number of metrics have been established. These metrics should be supported by automated collection tools, which are useful for measuring and improving the quality of software systems. One such tool is a static analyser. A static analyser has been developed for a subset of the Java language. A number of object-oriented software metrics have been evaluated using an attribute grammar approach, which is considered a well-defined approach to software metrics evaluation since it is based on the measurement of the source code itself. New definitions for a number of object-oriented metrics have been specified using attribute grammars. The tool has been built using the C language; the lexical analyser and syntax analyser have been generated using the lex and yacc tools under the Linux operating system. Four object-oriented metrics have been evaluated: Depth of Inheritance Tree, Number of Children, Response For a Class, and Coupling Between Object Classes. The software metrics are produced in the common metrics format used in the SCOPE project.
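    As a reference point for the first of the four metrics listed above, Depth of Inheritance Tree (DIT) counts the edges from a class to the root of its inheritance hierarchy. The sketch below computes it over a plain class-to-superclass map rather than via the attribute-grammar analyser the abstract describes; the class names are hypothetical.

```python
# Depth of Inheritance Tree over a hypothetical hierarchy map.
def dit(class_name, superclass_of):
    """DIT: edges from the class to its hierarchy root (roots have DIT 0)."""
    depth = 0
    while class_name in superclass_of:
        class_name = superclass_of[class_name]
        depth += 1
    return depth

hierarchy = {"Button": "Widget", "Widget": "Component"}  # Component is a root
print(dit("Button", hierarchy))     # 2
print(dit("Component", hierarchy))  # 0
```

    Number of Children is the dual walk over the same map (counting classes whose superclass is the given class); Response For a Class and Coupling Between Object Classes require method-call information and are where the source-code-level attribute grammar approach earns its keep.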

    An automated framework for software test oracle

    Context: One of the important issues in software testing is providing an automated test oracle. Test oracles are reliable sources of how the software under test must operate; in particular, they are used to evaluate the actual results produced by the software. However, in order to generate an automated test oracle, several oracle challenges need to be addressed: output-domain generation, mapping from the input domain to the output domain, and a comparator to decide on the accuracy of the actual outputs. Objective: This paper proposes an automated test oracle framework that addresses all of these challenges. Method: I/O Relationship Analysis is used to generate the output domain automatically, and Multi-Networks Oracles based on artificial neural networks are introduced to handle the second challenge. The last challenge is addressed using an automated comparator that adjusts the oracle precision by defining the comparison tolerance. The proposed approach was evaluated using an industry-strength case study, which was injected with faults. The quality of the proposed oracle was measured by assessing its accuracy, precision, misclassification error and practicality. Mutation testing was used to provide the evaluation framework, by implementing two different versions of the case study: a Golden Version and a Mutated Version. Furthermore, a comparative study between existing automated oracles and the proposed one is provided, based on which challenges each can automate. Results: The results indicate that the proposed approach automated 97% of the oracle generation process in this experiment. The accuracy of the proposed oracle was up to 98.26%, and the oracle detected up to 97.7% of the injected faults. Conclusion: The results of the study highlight the practicality of the proposed oracle in addition to the automation it offers.
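    The comparator idea in the Method section can be sketched as a tolerance check: an actual output is accepted when it falls within a configurable tolerance of the expected output. The function name and tolerance values below are illustrative, not the paper's.

```python
# Hypothetical tolerance-based comparator: widening the tolerance
# loosens the oracle's precision, as described above.
def within_tolerance(expected, actual, tolerance=0.05):
    """Relative check: accept if |expected - actual| <= tolerance * |expected|."""
    if expected == 0:
        return abs(actual) <= tolerance
    return abs(expected - actual) <= tolerance * abs(expected)

print(within_tolerance(100.0, 101.0))        # True  (1% error)
print(within_tolerance(100.0, 110.0))        # False (10% error)
print(within_tolerance(100.0, 110.0, 0.15))  # True  (loosened tolerance)
```

    In the paper's setting the expected value would come from the neural-network oracle rather than a hand-written golden value, so the tolerance also absorbs the network's approximation error.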

    Evaluation and Measurement of Software Process Improvement -- A Systematic Literature Review

    BACKGROUND: Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE: This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD: The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS: Seven distinct evaluation strategies were identified; the most common one, "Pre-Post Comparison", was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent) and Schedule (18 percent). Looking at measurement perspectives, "Project" represents the majority with 66 percent. CONCLUSION: The validity of SPI initiative evaluations is challenged by the scarce consideration of potential confounding factors, particularly given that "Pre-Post Comparison" was identified as the most common evaluation strategy, and by inaccurate descriptions of the evaluation context. Measurements assessing the short- and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.

    Selecting a Software Tool for ITIL using a Multiple Criteria Decision Analysis Approach

    The opportunity to improve service quality using ITIL has led many organizations to invest in the implementation of this framework. Selecting a software tool for ITIL is still one of the most difficult decisions, due to a lack of meaningful evaluation criteria and guidelines to help with that decision, making it one of the most important and error-prone steps in the process. A multi-criteria value model to evaluate software tools for ITIL, using a multi-criteria decision analysis (MCDA) approach based on MACBETH, is therefore proposed to address this problem. Criteria focused on tool functionality are extracted from the literature and used to assess four representative ITIL software solutions on the market; the approach is demonstrated in a company in the banking sector. Finally, using the Moody and Shanks framework, the proposed method is evaluated, showing that it is suitable for evaluating software tools for ITIL.
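    The core of such a multi-criteria value model can be sketched as a weighted additive aggregation: each tool receives a partial value score per criterion and criterion weights sum to 1. This is only a generic MCDA sketch, not MACBETH itself (which derives the weights and scores from qualitative pairwise judgments); the criteria, weights, scores, and tool names are invented.

```python
# Hypothetical weighted additive value model for ranking ITIL tools.
weights = {"incident_mgmt": 0.4, "change_mgmt": 0.35, "reporting": 0.25}

tools = {
    "ToolA": {"incident_mgmt": 80, "change_mgmt": 60, "reporting": 70},
    "ToolB": {"incident_mgmt": 70, "change_mgmt": 90, "reporting": 50},
}

def overall_value(scores, weights):
    """Weighted sum of per-criterion partial value scores (0-100 scale)."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(tools, key=lambda t: overall_value(tools[t], weights), reverse=True)
for t in ranking:
    print(t, round(overall_value(tools[t], weights), 2))
```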