
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev) side. However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Test Flakiness Prediction Techniques for Evolving Software Systems


    A holistic method for improving software product and process quality

    The concept of quality in general is elusive and multi-faceted, and is perceived differently by different stakeholders. Quality is difficult to define and extremely difficult to measure. Deficient software systems regularly result in failures which often lead to significant financial losses, but more importantly to the loss of human lives. Such systems need to be either scrapped and replaced by new ones, or corrected and improved through maintenance. One of the most serious challenges is how to deal with legacy systems which, even when not failing, inevitably require upgrades, maintenance and improvement because of malfunctioning or changing requirements, or because of changing technologies, languages, or platforms. In such cases, the dilemma is whether to develop solutions from scratch or to re-engineer the legacy system. This research addresses this dilemma and seeks to establish a rigorous method for the derivation of indicators which, together with management criteria, can help decide whether restructuring a legacy system is advisable. As the software engineering community has been moving from corrective to preventive methods, concentrating on both product quality improvement and process quality improvement has become imperative. This research combines Product Quality Improvement, primarily through the re-engineering of legacy systems, with Process Improvement methods, models and practices, and uses a holistic approach to study the interplay of Product and Process Improvement. The re-engineering factor rho, a composite metric, was proposed and validated. The design and execution of formal experiments tested hypotheses on the relationship of internal (code-based) and external (behavioural) metrics. In addition to testing the hypotheses, the insights gained on logistics challenges resulted in the development of a framework for the design and execution of controlled experiments in Software Engineering.
The next part of the research resulted in the development of the novel, generic and, hence, customisable Quality Model GEQUAMO, which observes the principle of orthogonality and combines a top-down analysis for the identification, classification and visualisation of software quality characteristics with a bottom-up method for measurement and evaluation. GEQUAMO II addressed weaknesses that were identified during various GEQUAMO implementations and expert validation by academics and practitioners. Further work on Process Improvement investigated Process Maturity and its relationship to Knowledge Sharing, resulting in the development of the I5P Visualisation Framework for Performance Estimation through the Alignment of Process Maturity and Knowledge Sharing. I5P was used in industry and was validated by experts from academia and industry. Using the principles that guided the creation of the GEQUAMO model, the CoFeD visualisation framework was developed for comparative quality evaluation and selection of methods, tools, models and other software artifacts. CoFeD is particularly useful because the selection of wrong methods, tools or even personnel is detrimental to the survival and success of projects and organisations, and even to individuals. Finally, throughout many years of research and teaching Software Engineering, Information Systems and Methodologies, I observed ambiguities of terminology: the use of one term to mean different concepts, and of one concept expressed in different terms. These practices result in a lack of clarity. Thus my final contribution comes in my reflections on terminology disambiguation for the achievement of clarity, and the development of a framework for achieving disambiguation of terms as a necessary step towards gaining maturity and justifying the use of the term “Engineering” 50 years after the term Software Engineering was coined.
This research resulted in the creation of new knowledge in the form of novel indicators, models and frameworks which can aid quantification and decision making, primarily on the re-engineering of legacy code and on the management of process and its improvement. The thesis also contributes to the broader debate and understanding of problems relating to Software Quality, and establishes the need for a holistic approach to software quality improvement from both the product and the process perspectives.

    On Run-Time Configuration Engineering

    Modern software applications allow users to change the behavior of an application and adapt it to different situations and contexts, without requiring any source code modifications or recompilation. To this end, applications leverage a wide range of software configuration mechanisms that provide a set of options that can be changed by users. According to several studies, incorrect values of configuration options cause severe errors that are hard to debug. Major companies such as Facebook, Google, and Amazon have faced serious outages and failures due to configuration, considered among the worst outages at these companies. In addition, several studies found that the software configuration mechanism increases the complexity of a software system and makes it harder to use. Such problems have a serious impact on different quality factors, such as the security, correctness, availability, comprehensibility, maintainability, and performance of software systems.
Several studies have been conducted on specific aspects of configuration engineering, with most of them focusing on debugging configuration failures and testing software configurations, while only a few research efforts have focused on other aspects of configuration engineering, such as the creation and maintenance of configuration options. However, we argue that software configuration can have a negative impact not only on the correctness of a software system, but also on other quality metrics, such as its comprehensibility and maintainability. In this thesis, we first take a step back to better understand the main activities involved in the process of run-time configuration engineering, before evaluating the impact of a catalog of best practices on the correctness and performance of the configuration engineering process. For these purposes, we conducted several qualitative and quantitative empirical studies on large open source projects and their repositories. We first conducted a qualitative study in which we sought to understand the configuration engineering process, the challenges and problems developers face during this process, and what practitioners and researchers recommend to help developers improve their software configuration engineering quality. By conducting 14 semi-structured interviews, a large survey, and a systematic literature review, we identified a configuration engineering process involving 9 activities, a set of 22 challenges faced in practice, and a set of 24 recommendations by experts.
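The run-time configuration mechanism discussed in this abstract can be made concrete with a minimal sketch. The option names, defaults, and file format here are hypothetical illustrations, not taken from any of the studied systems: the point is only that users edit a plain file to change behavior, with no source change or recompilation involved.

```python
import configparser

# Hypothetical options; real systems often expose hundreds of such knobs.
DEFAULTS = {"max_connections": "100", "cache_size_mb": "64", "debug": "false"}

def load_options(path):
    """Read user-editable options, falling back to defaults.

    Changing a value in the file changes program behavior without
    any code modification or recompilation."""
    parser = configparser.ConfigParser(defaults=DEFAULTS)
    parser.read(path)  # silently ignores a missing file
    section = parser["DEFAULT"]
    # Values are plain strings until parsed here, so a typo such as
    # max_connections = 10O only surfaces at this conversion, away
    # from the file the user actually edited.
    return {
        "max_connections": section.getint("max_connections"),
        "cache_size_mb": section.getint("cache_size_mb"),
        "debug": section.getboolean("debug"),
    }

opts = load_options("app.ini")  # no app.ini present, so defaults apply
print(opts["max_connections"])
```

This late, string-to-value conversion is one flavor of the hard-to-debug misconfigurations the studies above report: the error appears where the value is consumed, not where it was written.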

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4-5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics like software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications.

    Posttraumatic Stress Disorder Rates Among Children in Foster Versus Family Kinship Care

    Children in foster care represent a highly traumatized population. However, trauma researchers studying this population have focused primarily on maltreatment rather than the full spectrum of posttraumatic stress disorder (PTSD) attributes and symptoms for children living in nonkinship versus kinship foster homes. The purpose of this study was to address this gap in the literature, as well as to examine the benefits and limitations of placing children in kinship and nonkinship foster homes. Attachment theory was the theoretical framework. The research questions centered on the prevalence of specific types of posttraumatic stress symptoms in a sample of children living in both kinship and nonkinship foster homes, to determine which placement setting was more beneficial for children diagnosed with PTSD. This quasi-experimental study examined 221 foster parents who participated in an online survey. Findings suggest that there were no significant differences in reported PTSD symptoms between children in kinship and nonkinship foster homes; however, there was a significant difference in reported PTSD symptoms depending on psychotherapy received, suggesting that children who received mental health treatment services reported fewer PTSD symptoms. Findings may contribute to social change by providing knowledge that child welfare agencies can use to evaluate childhood PTSD and support foster children's placement stability, which may ultimately guide child welfare practice and policy.

    Variability Bugs: Program and Programmer Perspective


    Cognitive-support code review tools: improved efficiency of change-based code review by guiding and assisting reviewers

    Code reviews, i.e., systematic manual checks of program source code by other developers, have been an integral part of the quality assurance canon in software engineering since their formalization by Michael Fagan in the 1970s. Computer-aided tools supporting the review process have existed for decades and are now widely used in software development practice. Despite this long history and widespread use, current tools hardly go beyond simple automation of routine tasks. The core objective of this thesis is to systematically develop options for improved tool support for code reviews and to evaluate them in the interplay of research and practice. The starting point is a comprehensive analysis of the state of research and practice. Interview and survey data collected in this thesis show that review processes in practice are now largely change-based, i.e., based on checking the changes resulting from the iterative-incremental evolution of software. This is true not only for open source projects and large technology companies, as shown in previous research, but across the industry. Despite the common change-based core process, there are various differences in the details of the review processes. The thesis identifies possible factors influencing these differences; important ones appear to be the process variants supported and promoted by the review tool in use. In contrast, the tool in use has little influence on the fundamental decision to use regular code reviews. Instead, the interview and survey data suggest that the decision to use code reviews depends more on cultural factors. Overall, the analysis of the state of research and practice shows that there is potential for developing better code review tools, and that this potential is associated with the opportunity to increase efficiency in software development.
The present thesis argues that the most promising approach for better review support is reducing the reviewer's cognitive load when reviewing large code changes. Results of a controlled experiment support this reasoning. The thesis explores various possibilities for cognitive support, two of them in detail: guiding the reviewer by identifying and presenting a good order for reading the code changes under review, and assisting the reviewer through automatic determination of change parts that are irrelevant for review. In both cases, empirical data is used both to generate and to test hypotheses. To demonstrate the practical suitability of the techniques, they are also used in a partner company in regular development practice. For this evaluation of the cognitive support techniques in practice, a review tool was needed that is suitable both for use in the partner company and as a platform for review research. As no such tool was available, the code review tool "CoRT" was developed. Here, too, a combination of an analysis of the state of research, support of design decisions through scientific studies, and evaluation in practical use was employed. Overall, the results of this thesis can be roughly divided into three blocks: researchers and practitioners working on improving review tools receive an empirically and theoretically grounded catalog of requirements for cognitive-support review tools, available explicitly in the form of essential requirements and possible forms of realization, and implicitly in the form of the tool "CoRT". The second block consists of contributions to the fundamentals of review research, ranging from the comprehensive analysis of review processes in practice to the analysis of the impact of cognitive abilities (specifically, working memory capacity) on review performance.
The third block comprises innovative methodological approaches developed within this thesis, e.g., the use of process simulation for the development of heuristics for development teams, and new approaches in repository and data mining.
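The first guidance technique mentioned in this abstract, presenting a good order for reading the change parts under review, can be sketched roughly as follows. This is a simplified illustration under assumed data shapes (change parts as file/line records) and an assumed grouping heuristic; it is not CoRT's actual ordering algorithm, which draws on richer relatedness information.

```python
from collections import defaultdict

def reading_order(change_parts):
    """Order change parts so related ones are read together:
    group by file, visit files with the most changed parts first
    (so the reviewer builds context where most of the change lives),
    and read each file's parts in line order."""
    by_file = defaultdict(list)
    for part in change_parts:
        by_file[part["file"]].append(part)
    ordered = []
    for fname in sorted(by_file, key=lambda f: -len(by_file[f])):
        ordered.extend(sorted(by_file[fname], key=lambda p: p["line"]))
    return ordered

parts = [
    {"file": "util.py", "line": 80},
    {"file": "core.py", "line": 10},
    {"file": "core.py", "line": 55},
]
for p in reading_order(parts):
    print(f'{p["file"]}:{p["line"]}')  # core.py:10, core.py:55, util.py:80
```

A real tool could replace the group-by-file heuristic with call-graph or similarity relations between change parts without changing the overall shape of the computation.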