
    Component-based software engineering: a quantitative approach

    Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
    Background: Claims in Component-Based Development (CBD) are often supported only by qualitative expert opinion rather than by quantitative data. This contrasts with normal practice in other sciences, where sound experimental validation of claims is standard. Experimental Software Engineering (ESE) aims to bridge this gap. Unfortunately, it is common to find experimental validation efforts that are hard to replicate and compare, which hinders building up the body of knowledge in CBD.
    Objectives: In this dissertation our goals are (i) to contribute to the evolution of ESE with respect to the replicability and comparability of experimental work, and (ii) to apply our proposals to CBD, thus contributing to a deeper and sounder understanding of it.
    Techniques: We propose a process model for ESE, aligned with current experimental best practices, and combine this model with a measurement technique called Ontology-Driven Measurement (ODM). ODM aims to improve the state of practice in metrics definition and collection by making metrics definitions formal and executable, without sacrificing their usability. ODM uses standard technologies that adapt well to current integrated development environments.
    Results: Our contributions include the definition and preliminary validation of a process model for ESE and the proposal of ODM for supporting metrics definition and collection in the context of CBD. We use both the process model and ODM to perform a series of experimental works in CBD, including the cross-validation of a component metrics set for JavaBeans, a case study on the influence of practitioners' expertise in a sub-process of component development (component code inspections), and an observational study on reusability patterns of pluggable components (Eclipse plug-ins). These experimental works implied proposing, adapting, or selecting adequate ontologies, as well as formally defining metrics upon each of those ontologies.
    Limitations: Although our experimental work covers a variety of component models and, orthogonally, both process and product, the plethora of opportunities for using our quantitative approach to CBD is far from exhausted.
    Conclusions: The main contribution of this dissertation is the illustration, through practical examples, of how our experimental process model can be combined with ODM to support the experimental validation of claims in the context of CBD in a repeatable and comparable way. In addition, the techniques proposed in this dissertation are generic and can be applied to other software development paradigms.
    Funding: Departamento de Informática of the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa (FCT/UNL); Centro de Informática e Tecnologias da Informação of the FCT/UNL; Fundação para a Ciência e Tecnologia through the STACOS project (POSI/CHS/48875/2002); the Experimental Software Engineering Network (ESERNET); Association Internationale pour les Technologies Objets (AITO); Association for Computing Machinery (ACM).
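
    A minimal sketch of the Ontology-Driven Measurement idea as the abstract describes it: metrics are written as executable queries over an explicit ontology of the artefacts being measured, rather than as informal prose definitions. The tiny component ontology and the metric below are illustrative assumptions, not the dissertation's actual formalization.

        # Sketch: a metric as an executable query over a component ontology.
        # The ontology classes and the metric are hypothetical examples.
        from dataclasses import dataclass, field

        @dataclass
        class Operation:
            name: str

        @dataclass
        class Interface:
            name: str
            operations: list[Operation] = field(default_factory=list)

        @dataclass
        class Component:  # e.g. a JavaBean
            name: str
            provided: list[Interface] = field(default_factory=list)

        def interface_complexity(c: Component) -> float:
            """Average number of operations per provided interface."""
            if not c.provided:
                return 0.0
            return sum(len(i.operations) for i in c.provided) / len(c.provided)

        bean = Component("Button", [Interface("ActionSource", [
            Operation("addActionListener"), Operation("removeActionListener")])])
        print(interface_complexity(bean))  # 2.0

    Because the metric is ordinary executable code over a shared model, two研究 groups applying it to the same component set should get identical numbers, which is exactly the replicability the dissertation argues for.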

    E-Debitum: managing software energy debt

    35th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW '20) - International Workshop on Sustainable Software Engineering (SUSTAIN-SE).
    This paper extends previous work on a new software energy metric: Energy Debt. The metric reflects the implied cost, in terms of energy consumption over time, of choosing an energy-flawed software implementation over a more robust and efficient, yet more time-consuming, approach. The paper presents the implementation of a SonarQube plugin called E-Debitum, which calculates the energy debt of Android applications across their versions. The plugin uses a robust, well-defined, and extendable smell catalogue based on current green software literature, with each smell defining its potential energy savings. To conclude, an experimental validation of E-Debitum was executed on three popular Android applications with multiple releases, showing how their energy debt fluctuated across releases.
    This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020.
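
    A hedged sketch of how an energy-debt figure like E-Debitum's could be aggregated from a smell catalogue: each smell carries an estimated potential energy saving, and a release's debt is summed over detected occurrences. The smell names and savings values below are made up for illustration and are not the plugin's actual catalogue.

        # Hypothetical smell catalogue: smell -> estimated saving per occurrence.
        CATALOGUE = {
            "wakelock_leak": 5.0,
            "redundant_gps_polling": 3.5,
            "unbatched_network_calls": 1.2,
        }

        def energy_debt(occurrences: dict[str, int]) -> float:
            """Total implied energy cost of the smells detected in one release."""
            return sum(CATALOGUE[s] * n for s, n in occurrences.items() if s in CATALOGUE)

        # Debt fluctuation across releases, as in the paper's validation:
        releases = {
            "v1.0": {"wakelock_leak": 2, "unbatched_network_calls": 10},
            "v1.1": {"wakelock_leak": 1, "redundant_gps_polling": 4},
        }
        for version, found in releases.items():
            print(version, energy_debt(found))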

    Strategies for simulation software quality assurance applied to open source DEM

    We present a strategy to improve software quality for scientific simulation software, applied to the open source DEM code LIGGGHTS [1] [2]. We aim to improve the quality of the LIGGGHTS DEM code by two measures. Firstly, making the simulation code open source gives the whole user community the possibility to detect bugs in the source code and suggest improvements to code quality. Secondly, we apply a test harness, an important part of the quality-assurance workflow in software engineering [5]. In the case of scientific simulation software, it consists of a set of simulation examples that should span the range of applicability of the software as well as possible. Technically, in our case it consists of a set of 10-50 LIGGGHTS simulations and is run automatically on our cluster, where the number of processors, the code features, and the numerical models are varied. Qualitative results are automatically extracted and plotted for comparison, so that a huge parameter space of flow regimes, numerical models, code features, and parallelization situations can be covered. A test harness can aid in (a) finding bugs in the software, (b) checking parallel efficiency and consistency, (c) comparing different numerical models, and, most importantly, (d) experimental validation. Parallel consistency means that, within a parallel framework, we need to be able to compare the answers that runs with different numbers of processors give and the time they take to compute them. Experimental validation is especially important for scientific simulations. If experimental data is available for a test case, it is automatically compared to the numerical results by means of global quantities such as the number of particles in the simulation, translational and rotational kinetic energy, thermal energy, etc. The LIGGGHTS test harness aims to be a transparent and open community effort that everybody can contribute to in order to improve the quality of the LIGGGHTS code. We illustrate the usefulness of the test harness with several examples, focusing especially on experimental validation.
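
    A self-contained sketch of the validation-check idea described above: run a matrix of test cases with varying processor counts (for parallel consistency) and compare extracted global quantities against reference experimental values within a tolerance. The case names, quantities, and the stubbed runner are placeholders, not LIGGGHTS's actual interface.

        import itertools

        # Hypothetical reference values from experiments, per test case.
        REFERENCE = {
            "chute_flow": {"n_particles": 10000.0, "kinetic_energy": 4.2e-3},
        }
        TOL = 0.02  # 2% relative tolerance for validation checks

        def run_case(case: str, nprocs: int) -> dict[str, float]:
            """Placeholder for launching the solver (e.g. via mpirun) and
            parsing global quantities from its log; returns canned numbers
            here so the sketch stays self-contained."""
            return {"n_particles": 10000.0, "kinetic_energy": 4.25e-3}

        def validate(case: str, nprocs: int) -> None:
            results = run_case(case, nprocs)
            for key, ref in REFERENCE[case].items():
                rel = abs(results[key] - ref) / abs(ref)
                status = "OK" if rel <= TOL else "FAIL"
                print(f"{case} np={nprocs} {key}: {status} ({rel:.2%})")

        # Vary the processor count to exercise parallel consistency as well.
        for case, np_ in itertools.product(REFERENCE, [1, 2, 4, 8]):
            validate(case, np_)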

    DoKnowMe: Towards a Domain Knowledge-driven Methodology for Performance Evaluation

    Software engineering considers performance evaluation to be one of the key parts of software quality assurance. Unfortunately, there seems to be a lack of standard methodologies for performance evaluation, even within experimental computer science. Inspired by the concept of "instantiation" in object-oriented programming, we distinguish the generic performance evaluation logic from the scattered and ad-hoc relevant studies, and develop an abstract evaluation methodology (by analogy with a "class") that we name Domain Knowledge-driven Methodology (DoKnowMe). By replacing five predefined domain-specific knowledge artefacts, DoKnowMe can be instantiated into specific methodologies (by analogy with "objects") to guide evaluators in the performance evaluation of different software and even computing systems. We also propose a generic validation framework with four indicators (i.e. usefulness, feasibility, effectiveness, and repeatability), and use it to validate DoKnowMe in the Cloud services evaluation domain. Given the positive and promising validation results, we plan to integrate more common evaluation strategies to improve DoKnowMe and to focus further on the performance evaluation of Cloud autoscaler systems.
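
    The abstract's "class vs. object" analogy, sketched literally: the abstract methodology is a class whose five domain-specific knowledge artefacts are supplied at instantiation time. The artefact names below paraphrase the idea and are not the paper's exact terms.

        from dataclasses import dataclass

        @dataclass
        class DoKnowMe:
            # Five hypothetical domain-specific knowledge artefacts:
            metrics: list[str]               # what to measure
            benchmarks: list[str]            # workloads to drive the system
            experimental_factors: list[str]  # what to vary
            analysis_methods: list[str]      # how to interpret results
            reporting_templates: list[str]   # how to document them

            def evaluate(self, system: str) -> None:
                print(f"Evaluating {system} with metrics {self.metrics} "
                      f"under workloads {self.benchmarks}")

        # "Instantiating" the methodology for the Cloud services domain:
        cloud_eval = DoKnowMe(
            metrics=["latency", "throughput"],
            benchmarks=["YCSB"],
            experimental_factors=["instance type", "region"],
            analysis_methods=["ANOVA"],
            reporting_templates=["experiment report"],
        )
        cloud_eval.evaluate("object storage service")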

    Towards a software evolution benchmark

    Case studies are extremely popular in rapidly evolving research disciplines such as software engineering because they allow a quick but fair assessment of new techniques. Unfortunately, a proper experimental set-up is rarely in place: all too often, case studies are based on a single small toy example chosen to favour the technique under study. Such lack of scientific rigour prevents fair evaluation and has disastrous consequences for the credibility of our field. In this paper, we propose to use a representative set of cases as a benchmark for comparing various techniques dealing with software evolution. We hope that this proposal will launch a consensus-building process that must eventually lead to a scientifically sound validation method for researchers investigating reverse- and re-engineering techniques.

    Usability Inspection in Model-Driven Web Development: Empirical Validation in WebML

    There is a lack of empirically validated usability evaluation methods that can be applied to models in model-driven Web development. Evaluating these models allows early detection of the usability problems perceived by the end-user. This motivated us to propose WUEP, a usability inspection method that can be integrated into different model-driven Web development processes. We previously demonstrated how WUEP can be used effectively when following the Object-Oriented Hypermedia method. In order to provide evidence of WUEP's generalizability, this paper presents the operationalization and empirical validation of WUEP in another well-known method: WebML. The effectiveness, efficiency, perceived ease of use, and satisfaction of WUEP were evaluated in comparison with Heuristic Evaluation (HE) from the viewpoint of novice inspectors. The results show that WUEP was more effective and efficient than HE in detecting usability problems on models. Inspectors were also satisfied when applying WUEP, and found it easier to use than HE.
    Fernández Martínez, A.; Abrahão Gonzales, S.M.; Insfrán Pelozo, C.E.; Matera, M. (2013). Usability Inspection in Model-Driven Web Development: Empirical Validation in WebML. Lecture Notes in Computer Science 8107:740-756. doi:10.1007/978-3-642-41533-3_45
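
    A sketch of the two standard measures such comparisons rest on, hedged: the paper's exact operationalization may differ. Effectiveness is commonly the fraction of known real usability problems an inspector detects; efficiency is detected problems per unit of inspection time. The inspector numbers below are hypothetical.

        def effectiveness(detected_real: int, total_real: int) -> float:
            """Fraction of the known real usability problems that were found."""
            return detected_real / total_real

        def efficiency(detected_real: int, minutes: float) -> float:
            """Real usability problems detected per hour of inspection."""
            return detected_real / (minutes / 60.0)

        # Hypothetical inspector results for WUEP vs. Heuristic Evaluation:
        for method, (found, total, minutes) in {
            "WUEP": (12, 20, 90.0),
            "HE":   (8, 20, 100.0),
        }.items():
            print(method,
                  f"effectiveness={effectiveness(found, total):.0%}",
                  f"efficiency={efficiency(found, minutes):.1f}/h")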

    CupCleaner: A Data Cleaning Approach for Comment Updating

    Recently, deep learning-based techniques have shown promising performance on various software engineering tasks. For these learning-based approaches to perform well, obtaining high-quality data is a fundamental and crucial issue. Comment updating is an emerging software engineering task that aims to automatically update comments based on changes in the corresponding source code. However, datasets for comment updating are usually crawled from committed versions in open source repositories such as GitHub, where there is a lack of quality control over comments. In this paper, we focus on cleaning existing comment updating datasets by taking into account properties of the comment updating process in software development. We propose a semantic and overlapping-aware approach named CupCleaner (Comment UPdating's CLEANER) for this purpose. Specifically, we calculate a score based on the semantics and overlapping information of the code and comments. Based on the distribution of the scores, we filter out the low-scoring data in the tail of the distribution to remove potentially unclean data. We first conducted a human evaluation of the noisy data and the high-quality data identified by CupCleaner. The results show that the human ratings of the noisy data identified by CupCleaner are significantly lower. We then applied our data cleaning approach to the training and validation sets of three existing comment updating datasets while keeping the test set unchanged. Our experimental results show that even after filtering out over 30% of the data with CupCleaner, all performance metrics still improve. The experimental results on the cleaned test set also suggest that CupCleaner may help construct datasets for other updating-related tasks.
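
    A hedged sketch of the filtering idea: score each sample by combining a semantic-similarity signal with token overlap between the code change and the comment change, then drop the low-scoring tail of the distribution. The sample fields, the weights, and the use of token overlap as a stand-in for an embedding-based similarity are illustrative assumptions, not the paper's exact definition.

        def token_overlap(a: str, b: str) -> float:
            """Jaccard overlap of whitespace tokens; 0.0 for two empty strings."""
            ta, tb = set(a.split()), set(b.split())
            return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

        def score(sample: dict) -> float:
            # Stand-in for a semantic similarity between old and new comment:
            sem = token_overlap(sample["old_comment"], sample["new_comment"])
            # Overlap between the code change and the comment change:
            ovl = token_overlap(sample["code_change"], sample["comment_change"])
            return 0.5 * sem + 0.5 * ovl  # hypothetical equal weighting

        def clean(dataset: list[dict], drop_fraction: float = 0.3) -> list[dict]:
            """Keep samples above the drop_fraction quantile of the scores."""
            scores = sorted(score(s) for s in dataset)
            cutoff = scores[int(drop_fraction * len(scores))]
            return [s for s in dataset if score(s) >= cutoff]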