
    Technical Debt Management: The Road Ahead for Successful Software Delivery

    Technical Debt, considered by many to be the 'silent killer' of software projects, has undeniably become part of the everyday vocabulary of software engineers. We know it compromises the internal quality of a system, whether introduced deliberately or inadvertently. We understand that Technical Debt is not entirely negative, as it often serves the purpose of expediency. But it carries a clear risk, especially for large and complex systems with extended service life: if we do not properly manage Technical Debt, it threatens to "bankrupt" those systems. Software engineers and organizations that develop software-intensive systems face an increasingly dire future for those systems if they do not start incorporating Technical Debt management into their day-to-day practice. But how? What have the wins and losses of the past decade of research and practice in managing Technical Debt taught us, and where should we focus next? In this paper, we examine the state of the art in both the industry and research communities in managing Technical Debt; we then distill the gaps in industrial practice and the shortcomings of research, and synthesize them to define and articulate a vision for what Technical Debt management looks like five years hence.

    An approach for quantitative aggregation of evidence from controlled experiments in software engineering

    Empirical studies are necessary to gain reliable insights into the effects of software engineering technologies and to control the risks associated with their use. Recently, many empirical studies have been run in a variety of software engineering areas (e.g., inspections). However, to be useful for decision-making, their results require synthesis, which means analyzing, combining, summarizing, and generalizing the results of empirical studies. Software engineering still lacks a systematic approach for synthesis: today, most syntheses in software engineering are narrative, informal summaries. These narrative reviews suffer from a number of weaknesses; in particular, they are subjective and thus often incorrect.
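
    A minimal sketch of what such quantitative aggregation can look like, assuming a fixed-effect, inverse-variance meta-analysis over standardized effect sizes; the study names, effect sizes, and variances below are hypothetical, and this is not necessarily the aggregation procedure proposed in the paper:

        # Fixed-effect, inverse-variance pooling of effect sizes (hypothetical data).
        from math import sqrt

        # Each entry: (study id, standardized effect size, variance of that estimate)
        studies = [
            ("exp_A", 0.42, 0.04),
            ("exp_B", 0.31, 0.09),
            ("exp_C", 0.55, 0.06),
        ]

        weights = [1.0 / var for _, _, var in studies]   # inverse-variance weights
        pooled = sum(w * es for (_, es, _), w in zip(studies, weights)) / sum(weights)
        se_pooled = sqrt(1.0 / sum(weights))             # standard error of the pooled effect

        print(f"pooled effect = {pooled:.3f}, 95% CI half-width = {1.96 * se_pooled:.3f}")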

    Experiences with a Case Study on Pair Programming

    Agile methods are becoming more and more popular. The best known among them is probably Extreme Programming (XP) [2]. One key practice of XP is Pair Programming (PP), where two developers work simultaneously on a programming task. However, despite their popularity, little empirical knowledge exists about the limitations of these methods. Some empirical studies on Pair Programming exist [5][8]. These studies compared PP to solo programming and were conducted on small, isolated tasks. In this paper, we describe a case study conducted on a more realistic task within a university practical course, carried out in teams of six students and comprising about 700 person-hours of total effort. Within our case study setting, we found weak support for the results achieved in earlier studies. More importantly, we describe the experiences we gained in conducting the case study and suggest improvements for future investigations.

    Challenges in Assessing Technical Debt Based on Dynamic Runtime Data

    Existing definitions and metrics of technical debt (TD) tend to focus on static properties of software artifacts, in particular on code measurement. Our experience from software renovation projects is that dynamic aspects - runtime indicators of TD - often play a major role. In this position paper, we present insights and solution ideas gained from numerous software renovation projects at QAware and from a series of interviews held as part of the ProDebt research project. We interviewed ten practitioners from two German software companies in order to understand current requirements and potential solutions to problems regarding TD. Based on the interview results, we motivate the need for measuring dynamic indicators of TD from the practitioners' perspective, including current practical challenges. We found that the main challenges include a lack of production-ready measurement tools for runtime indicators, the definition of suitable metrics and their thresholds, and the interpretation of these metrics in order to understand the actual debts and derive countermeasures. Measuring and interpreting dynamic indicators of TD is especially difficult for companies to implement because the related metrics depend heavily on the runtime context and are therefore hard to generalize. We also sketch initial solution ideas by presenting examples of dynamic indicators of TD and outline directions for future work.
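
    A minimal sketch of a threshold-based runtime TD indicator, assuming 95th-percentile latency per endpoint as the metric; the endpoints, samples, and thresholds are hypothetical and not taken from the QAware projects or the ProDebt interviews:

        # Flag endpoints whose observed p95 latency exceeds a context-specific threshold (hypothetical data).
        latency_samples_ms = {                           # runtime measurements per endpoint
            "/orders": [120, 135, 180, 950, 140, 160],
            "/users": [45, 50, 48, 52, 47, 49],
        }
        thresholds_ms = {"/orders": 300, "/users": 100}  # thresholds depend on the runtime context

        def p95(samples):
            ordered = sorted(samples)
            return ordered[max(0, int(round(0.95 * len(ordered))) - 1)]

        for endpoint, samples in latency_samples_ms.items():
            observed = p95(samples)
            if observed > thresholds_ms[endpoint]:
                print(f"possible runtime TD at {endpoint}: p95 = {observed} ms > threshold {thresholds_ms[endpoint]} ms")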

    Assessing the Impact of Active Guidance for Defect Detection: A Replicated Experiment

    Scenario-based reading (SBR) techniques have been proposed as an alternative to checklists, supporting inspectors throughout the reading process in the form of operational scenarios. Many studies have compared these techniques with regard to their impact on inspector performance. However, most existing studies have compared generic checklists to a set of specific reading scenarios, thus confounding the effects of two key SBR factors: separation of concerns and active guidance. In previous work, we conducted a preliminary repeated case study at the University of Kaiserslautern to evaluate the impact of active guidance on inspection performance. Specifically, we compared reading scenarios and focused checklists, both of which were perspective-based. The only difference between the reading techniques was the active guidance provided by the reading scenarios. We have now replicated the initial study with a controlled experiment whose subjects were 43 graduate students in computer science at the University of Bari. We did not find evidence that active guidance in reading techniques affects the effectiveness or efficiency of defect detection. However, inspectors showed better acceptance of focused checklists than of reading scenarios.
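
    A minimal sketch of the two outcome measures compared in such inspection experiments, effectiveness (share of known defects found) and efficiency (defects found per hour); the group names and figures are hypothetical, not data from the replication:

        # Effectiveness and efficiency per treatment group (hypothetical data).
        TOTAL_DEFECTS = 30  # known defects in the inspected document

        groups = {
            # group: (defects found, inspection effort in hours)
            "reading_scenarios": (18, 2.5),
            "focused_checklists": (17, 2.2),
        }

        for group, (found, hours) in groups.items():
            effectiveness = found / TOTAL_DEFECTS  # share of known defects detected
            efficiency = found / hours             # defects detected per hour
            print(f"{group}: effectiveness = {effectiveness:.2f}, efficiency = {efficiency:.1f} defects/h")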

    Process Improvement through Defect Flow Measurement at a Medium-Sized Company

    Consistently high product quality is particularly important for small and medium-sized enterprises (SMEs) in order to ensure customer satisfaction. A sound development and quality assurance process is decisive for achieving high software quality. Systematic defect measurement plays a key role here, since it is the only way to determine empirically how effective existing quality assurance processes are, which defect types are found by which processes, and what potential for improvement exists. This contribution describes the approach taken and the experiences gained in defining and introducing a measurement program for capturing a defect flow model (Fehlerstrommodell, FSM) at a medium-sized company. The measurement program has been introduced and is actively used. To our knowledge, this is the first documented introduction of an FSM at an SME. The experiences gained allowed the definition and introduction process for SMEs to be refined. Despite the early stage, and despite the potential for improvement in the quality of the schema and of the defect recording revealed by the statistical evaluation, first interesting results can already be presented regarding the defect correction costs of different defect types and detection points.
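
    A minimal sketch of the kind of analysis a defect flow model enables, aggregating correction cost by defect type and detection point; the record fields and values are hypothetical, not data from the company described above:

        # Mean correction cost grouped by defect type and detection point (hypothetical data).
        from collections import defaultdict

        defect_records = [
            # (defect type, detection point, correction cost in hours)
            ("interface", "code review", 1.5),
            ("interface", "system test", 6.0),
            ("logic", "unit test", 2.0),
            ("logic", "production", 14.0),
        ]

        costs = defaultdict(list)
        for defect_type, detection_point, cost_h in defect_records:
            costs[(defect_type, detection_point)].append(cost_h)

        for (defect_type, detection_point), values in sorted(costs.items()):
            mean_cost = sum(values) / len(values)
            print(f"{defect_type} found in {detection_point}: mean correction cost {mean_cost:.1f} h")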