3,201 research outputs found

    Use of systematic reviews as a knowledge-generation strategy for continuous improvement

    Get PDF
    This paper describes how Systematic Reviews (a scientific strategy aimed at producing high-level pieces of knowledge) can support the “continuous improvement” process that CMMI proposes for companies seeking to certify at the highest maturity levels.
    Workshop de Ingeniería de Software y Bases de Datos (WISBD) - Red de Universidades con Carreras en Informática (RedUNCI)

    Developing a catalogue of errors and evaluating its impact on software development

    Get PDF
    The development of quality software is of paramount importance, yet this has been and continues to be an elusive goal for software engineers. Delivered software often fails due to errors injected during its development. Correcting these errors early in development, or preventing them altogether, can therefore be considered one way to improve software quality. In this thesis, the development of a Catalogue of Errors is described. Field studies with senior software engineering students are used to confirm that developers using the Catalogue of Errors commit fewer errors in their development artifacts. The impact of the Catalogue of Errors on productivity is also examined.

    An eye-tracking study on the role of scan time in finding source code defects

    Full text link

    Foundations of Empirical Software Engineering: The Legacy of Victor R. Basili

    Get PDF
    This book captures the main scientific contributions of Victor R. Basili, who has significantly shaped the field of empirical software engineering from its very start. He was the first to claim that software engineering needed to follow the model of other physical sciences and develop an experimental paradigm. By working on this postulate, he developed concepts that today are well known and widely used, including the Goal-Question-Metric method, the Quality-Improvement paradigm, and the Experience Factory. He is one of the few software pioneers who can aver that their research results are not just scientifically acclaimed but are also used as industry standards. On the occasion of his 65th birthday, celebrated with a symposium in his honor at the International Conference on Software Engineering in St. Louis, MO, USA in May 2005, Barry Boehm, Hans Dieter Rombach, and Marvin V. Zelkowitz, each a long-time collaborator of Victor R. Basili, selected the 20 most important research papers of their friend and arranged them according to subject field. They then invited renowned researchers to write topical introductions. The result is this commented collection of timeless cornerstones of software engineering, hitherto available only in scattered publications.

    Proceedings of the Twenty-Third Annual Software Engineering Workshop

    Get PDF
    The Twenty-third Annual Software Engineering Workshop (SEW) provided 20 presentations designed to further the goals of the Software Engineering Laboratory (SEL) of NASA-GSFC. The presentations were selected for their creativity. The sessions, held on December 2-3, 1998, centered on the SEL, Experimentation, Inspections, Fault Prediction, Verification and Validation, and Embedded Systems and Safety-Critical Systems.

    Building knowledge through families of experiments

    Full text link

    Software Engineering Laboratory Series: Proceedings of the Twentieth Annual Software Engineering Workshop

    Get PDF
    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Empirically-Grounded Construction of Bug Prediction and Detection Tools

    Get PDF
    There is an increasing demand for high-quality software, as software bugs have an economic impact not only on software projects but also on national economies in general. Software quality is achieved via the main quality assurance activities of testing and code review. However, these activities are expensive, so they need to be carried out efficiently. Auxiliary software quality tools such as bug detection and bug prediction tools help developers focus their testing and reviewing activities on the parts of the software most likely to contain bugs. However, these tools are far from being adopted as mainstream development tools. Previous research points to their inability to adapt to the peculiarities of individual projects and their high rate of false positives as the main obstacles to their adoption. We propose empirically-grounded analysis to improve the adaptability and efficiency of bug detection and prediction tools. For a bug detector to be efficient, it needs to detect bugs that are conspicuous, frequent, and specific to a software project. We empirically show that null-related bugs fulfill these criteria and are worth building detectors for. We analyze the null dereferencing problem and find that its root cause lies in methods that return null. We propose an empirical solution to this problem that relies on the wisdom of the crowd. For each API method, we extract a nullability measure that expresses how often the return value of that method is checked against null in the ecosystem of the API. We use nullability to annotate API methods with nullness annotations and to warn developers about missing and excessive null checks. For a bug predictor to be efficient, it needs to be optimized both as a machine learning model and as a software quality tool. We empirically show how feature selection and hyperparameter optimization improve prediction accuracy. We then optimize bug prediction to locate the maximum number of bugs in the minimum amount of code by finding the most cost-effective combination of bug prediction configurations, i.e., independent variables, machine learning model, and response variable. We show that using both source code and change metrics as independent variables, applying feature selection to them, and then using an optimized Random Forest to predict the number of bugs yields the most cost-effective bug predictor. Throughout this thesis, we show how empirically-grounded analysis helps us build efficient bug prediction and detection tools and adapt them to the characteristics of each software project.
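    Two small Python sketches may help make the abstract's techniques concrete. First, the ecosystem-based "nullability" measure: the fraction of call sites in a corpus that check a method's return value against null, used to flag missing or excessive null checks. All names, thresholds, and the toy corpus below are illustrative assumptions, not the thesis's actual tooling or data.

        from collections import defaultdict

        HIGH = 0.9  # assumed threshold: callers almost always check -> treat as nullable
        LOW = 0.1   # assumed threshold: callers almost never check -> treat as non-null

        def nullability(call_sites):
            # call_sites: list of booleans, True if that call site null-checks the return value
            if not call_sites:
                return None  # no evidence for this method in the corpus
            return sum(call_sites) / len(call_sites)

        def advise(method, call_sites, caller_checks):
            # Warn about a missing or an excessive null check at a single call site.
            score = nullability(call_sites)
            if score is None:
                return f"{method}: no data"
            if score >= HIGH and not caller_checks:
                return f"{method}: {score:.0%} of callers null-check -- missing null check?"
            if score <= LOW and caller_checks:
                return f"{method}: {score:.0%} of callers null-check -- excessive null check?"
            return f"{method}: ok"

        # Toy corpus of observed call sites per API method (True = null-checked).
        corpus = defaultdict(list, {
            "Map.get": [True] * 18 + [False] * 2,  # callers almost always check
            "String.trim": [False] * 20,           # callers never check
        })

        print(advise("Map.get", corpus["Map.get"], caller_checks=False))
        print(advise("String.trim", corpus["String.trim"], caller_checks=True))

    Second, a rough sketch of the cost-effective bug prediction setup: source code and change metrics as independent variables, feature selection, and a Random Forest predicting bug counts so that review effort can be ranked by predicted bugs per file. The metric count, k, and hyperparameters are placeholders, not the tuned values the thesis arrives at.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in data: rows are files/classes, columns are combined
        # source code metrics and change metrics (the independent variables).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))                                   # 12 hypothetical metrics
        y = np.maximum(0, (2 * X[:, 0] + rng.normal(size=200)).round())  # bug counts (response)

        # Feature selection followed by a Random Forest regressor predicting the
        # *number* of bugs, as in the abstract, rather than a buggy/clean label.
        model = make_pipeline(
            SelectKBest(f_regression, k=6),
            RandomForestRegressor(n_estimators=100, random_state=0),
        )
        model.fit(X, y)

        # Rank files by predicted bug count so testing/review effort goes to the
        # code most likely to contain bugs -- the cost-effectiveness idea.
        ranking = np.argsort(model.predict(X))[::-1]
        print("inspect first:", ranking[:5])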