
    Detailed Overview of Software Smells

    This document provides an overview of the literature on software smells, covering various dimensions of smells along with their corresponding references.

    Performance Problem Diagnostics by Systematic Experimentation

    In this book, we introduce an automatic, experiment-based approach to performance problem diagnostics in enterprise software systems. The approach systematically searches for the root causes of detected performance problems by executing series of systematic performance tests. It is evaluated in various case studies, which show that it is applicable to a wide range of contexts.
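
    The blurb does not spell out the search procedure. As a rough, hypothetical illustration of "systematically searching for root causes by executing series of performance tests", the Python sketch below models diagnostics as a depth-first drill-down over a hierarchy of performance-problem hypotheses, each checked by one dedicated experiment; all names, metrics, and thresholds are invented, not taken from the book.

        # Hypothetical sketch of an experiment-based diagnostics search; the
        # hypothesis hierarchy, symptom checks, and inline "experiments" are
        # illustrative stand-ins, not the book's actual implementation.
        from dataclasses import dataclass, field
        from typing import Callable, Dict, List

        @dataclass
        class Hypothesis:
            name: str
            experiment: Callable[[], Dict[str, float]]   # runs one targeted performance test
            symptom: Callable[[Dict[str, float]], bool]  # decides whether the symptom shows up
            children: List["Hypothesis"] = field(default_factory=list)

        def diagnose(h: Hypothesis) -> List[str]:
            """Depth-first drill-down: refine only hypotheses whose symptom is confirmed."""
            if not h.symptom(h.experiment()):
                return []                                # symptom absent: prune this subtree
            causes = [c for child in h.children for c in diagnose(child)]
            return causes or [h.name]                    # confirmed leaf = candidate root cause

        # Toy run: a high-response-time symptom narrowed down to an inefficient query.
        slow_query = Hypothesis("inefficient DB query", lambda: {"p95_ms": 900.0},
                                lambda r: r["p95_ms"] > 500.0)
        overall = Hypothesis("high response time", lambda: {"p95_ms": 900.0},
                             lambda r: r["p95_ms"] > 500.0, [slow_query])
        print(diagnose(overall))                         # ['inefficient DB query']

    The design choice mirrored here is that coarse, cheap experiments gate the more specific (and more expensive) ones, so the search only descends where a symptom is actually confirmed.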

    Industry–Academia Research Collaboration and Knowledge Co-creation: Patterns and Anti-patterns

    Increasing the impact of software engineering research in the software industry and society at large has long been a high-priority concern for the software engineering community. The problem of two cultures, research conducted in a vacuum (disconnected from the real world), and misaligned time horizons are just some of the many complex challenges standing in the way of successful industry–academia collaborations. This article reports on the experience of research collaboration and knowledge co-creation between industry and academia in software engineering as a way to bridge the research–practice gap. Our experience spans 14 years of collaboration between researchers in software engineering and the European and Norwegian software and IT industry. Using participant observation and interviews, we collected and then analyzed an extensive record of qualitative data. Drawing on the findings and the experience gained, we provide a set of 14 patterns and 14 anti-patterns for industry–academia collaboration, aimed at supporting other researchers and practitioners in establishing and running research collaboration projects in software engineering.

    Acta Cybernetica: Volume 23, Number 2.


    Trading Indistinguishability-based Privacy and Utility of Complex Data

    The collection and processing of complex data, like structured data or infinite streams, facilitates novel applications. At the same time, it raises privacy requirements from the data owners. Consequently, data administrators use privacy-enhancing technologies (PETs) to sanitize the data; these are frequently based on indistinguishability-based privacy definitions. A well-known challenge in engineering PETs is the privacy-utility trade-off. Although the literature covers a number of trade-offs, there are still combinations of involved entities, privacy definition, type of data, and application for which valuable trade-offs are missing. In this thesis, for two important groups of applications processing complex data, we study (a) which indistinguishability-based privacy and utility requirements are relevant and (b) whether existing PETs solve the trade-off sufficiently, and (c) we propose novel PETs that extend the state of the art substantially in terms of methodology as well as achieved privacy or utility. Overall, we provide four contributions divided into two parts.

    In the first part, we study applications that analyze structured data with distance-based mining algorithms. We reveal that an essential utility requirement is the preservation of the pair-wise distances of the data items. Consequently, we propose distance-preserving encryption (DPE), together with a general procedure for engineering respective PETs by leveraging existing encryption schemes. As a proof of concept, we apply it to SQL log mining, which is useful for database performance tuning.

    In the second part, we study applications that monitor query results over infinite streams. To this end, w-event differential privacy is the state of the art. Here, PETs use mechanisms that typically add noise to query results. First, we study state-of-the-art mechanisms with respect to the utility they provide. Conducting the largest benchmark to date that fulfills requirements derived from the limitations of prior experimental studies, we contribute new insights into the strengths and weaknesses of existing mechanisms. One of the most unexpected, yet explainable, results is a baseline supremacy: one of the two baseline mechanisms delivers high or even the best utility. A natural follow-up question is whether baseline mechanisms already provide reasonable utility. So, second, we perform a case study from the area of electricity grid monitoring, which reveals two results: achieving reasonable utility is only possible under weak privacy requirements, and the utility measured with application-specific utility metrics decreases faster than the sanitization error, used as the utility metric in most studies, suggests. As a third contribution, we propose a novel differential privacy-based privacy definition called Swellfish privacy. It allows tuning utility beyond incremental w-event mechanism design by supporting time-dependent privacy requirements. We prove, formally as well as experimentally, that it increases utility significantly. In total, our thesis contributes substantially to the research field and reveals directions for future research.
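
    The abstract presupposes familiarity with w-event differential privacy. As a hedged illustration only (not one of the mechanisms benchmarked in the thesis), the sketch below implements the simple uniform-allocation baseline from the w-event DP literature: the budget epsilon is split evenly over any window of w timestamps, and Laplace noise calibrated to that per-timestamp budget is added to every released query result. Stream contents and parameters are invented for the example.

        # Minimal sketch of a uniform-allocation baseline under w-event
        # differential privacy: epsilon is divided evenly across each window
        # of w timestamps, and Laplace noise with scale sensitivity/eps_i is
        # added to every release. Values below are illustrative only.
        import numpy as np

        def uniform_mechanism(stream, epsilon: float, w: int, sensitivity: float = 1.0):
            """Sanitize a stream of query results so that any w consecutive
            releases jointly satisfy epsilon-differential privacy."""
            eps_per_step = epsilon / w              # uniform allocation within each window
            scale = sensitivity / eps_per_step      # Laplace scale b = sensitivity / eps_i
            for true_value in stream:
                yield true_value + float(np.random.laplace(0.0, scale))

        # Example: a stream of per-interval electricity consumption aggregates.
        readings = [120.0, 118.0, 133.0, 129.0]
        print([round(v, 1) for v in uniform_mechanism(readings, epsilon=1.0, w=3)])

    The "baseline supremacy" finding reported above refers to simple mechanisms of roughly this kind delivering high, or even the best, utility in the benchmark.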

    Commits Analysis for Software Refactoring Documentation and Recommendation

    Software projects frequently evolve to meet new requirements and/or to fix bugs. While this evolution is critical, it may have a negative impact on the quality of the system. To improve the quality of software systems, the first step is the “detection” of code antipatterns to be restructured, which can be considered “refactoring opportunities”. The second step is the “prioritization” of the code fragments to be refactored/fixed. The third step is the “recommendation” of refactorings to fix the detected quality issues. The fourth step is “testing” the recommended refactorings to evaluate their correctness. The fifth step is the “documentation” of the applied refactorings. In this thesis, we addressed these five steps through the following contributions:
    1. We designed a bi-level multi-objective optimization approach to enable the generation of antipattern examples that can improve the efficiency of detection rules for bad quality designs.
    2. Regarding refactoring recommendations, we first identify refactoring opportunities by analyzing developer commit messages and the quality of changed files, then distill this knowledge into usable, context-driven refactoring recommendations that complement static and dynamic analysis of code (a toy keyword-based sketch follows this list).
    3. We proposed an interactive refactoring recommendation approach that enables developers to pinpoint their preferences simultaneously in the objective (quality metrics) and decision (code location) spaces.
    4. We proposed a semi-automated refactoring documentation bot that helps developers interactively check and validate the documentation of the refactorings and/or quality improvements at the file level for each opened pull request before it is reviewed or merged to the master branch.
    5. We performed interviews with and a survey of practitioners, as well as a quantitative analysis of 1,193 commit messages containing refactorings, to establish a refactoring documentation model as a set of components.
    6. We formulated the recommendation of code reviewers as a multi-objective search problem to balance the conflicting objectives of expertise, availability, and history of collaborations.
    7. We built a dataset of 50,000+ composite code changes from more than 7,000 open-source projects, then proposed and evaluated a new deep learning technique that generates commit messages for composite code changes using an attentional encoder-decoder with two encoders and BERT embeddings.
    Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/169486/1/Soumaya Rebai final dissertation.pdf
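
    As an illustrative sketch of the first part of contribution 2, the snippet below flags commits whose messages self-report refactoring activity via a keyword pattern. The keyword list and commit tuples are assumptions for illustration, not the thesis's actual detection model.

        # Illustrative sketch: flag commits whose messages suggest refactoring
        # activity. Pattern and data are hypothetical, not from the thesis.
        import re

        REFACTOR_PATTERN = re.compile(
            r"\b(refactor\w*|restructur\w*|extract(?:ed)?\s+(?:method|class)|"
            r"renam\w*|simplif\w*|clean[\s-]?up|decompos\w*)\b",
            re.IGNORECASE,
        )

        def refactoring_commits(commits):
            """Return (sha, message) pairs whose message suggests a refactoring."""
            return [(sha, msg) for sha, msg in commits if REFACTOR_PATTERN.search(msg)]

        history = [
            ("a1b2c3", "Extract method to remove duplicated validation logic"),
            ("d4e5f6", "Fix NPE in login handler"),
        ]
        print(refactoring_commits(history))   # [('a1b2c3', 'Extract method ...')]

    In the thesis this kind of commit-message signal is combined with the quality of the changed files, rather than used on its own.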

    Building and evaluating a theory of architectural technical debt in software-intensive systems

    Architectural technical debt in software-intensive systems is a metaphor used to describe the “big” design decisions (e.g., choices regarding structure, frameworks, technologies, languages, etc.) that, while suitable or even optimal when made, significantly hinder progress in the future. While other types of debt, such as code-level technical debt, can be readily detected by static analyzers and often refactored with minimal or incremental effort, architectural debt is hard to identify, carries wide-ranging remediation costs, and is daunting and therefore often avoided. In this study, we aim to develop a better understanding of how software development organizations conceptualize architectural debt and how they deal with it. To do so, we apply a mixed empirical method consisting of a grounded theory study followed by focus groups. With the grounded theory method, we construct a theory on architectural technical debt by eliciting qualitative data from software architects and senior technical staff at a wide range of heterogeneous software development organizations. We then apply the focus group method to evaluate the emerging theory and refine it according to the newly collected data. The result of the study, i.e., the theory emerging from the gathered data, constitutes an encompassing conceptual model of architectural technical debt, identifying and relating concepts such as its symptoms, causes, consequences, management strategies, and communication problems. From the conducted focus groups, we assessed that the theory adheres to the four evaluation criteria of classic grounded theory: it fits its underlying data, is able to work, has relevance, and is modifiable as new data appears. By grounding the findings in empirical evidence, the theory provides researchers and practitioners with novel knowledge on the crucial factors of architectural technical debt experienced in industrial contexts.