13 research outputs found

    Workflow Management versus Case Handling: Results from a Controlled Software Experiment

    Business Process Management (BPM) technology has become an important instrument for improving process performance. When considering its use, however, enterprises typically have to rely on vendor promises or qualitative reports. What is still missing, and what is also demanded by IT decision makers, are quantitative evaluations based on empirical and experimental research. This paper takes up this demand and illustrates how experimental research can be applied in the BPM field. The conducted experiment compares the effort for implementing a sample business process based either on standard workflow technology or on a case handling system. We motivate and describe the experiment design, discuss threats to the validity of the experiment results (as well as risk mitigations), and present the experiment results. In general, more experimental research is needed in order to obtain more valid data on the various aspects and effects of BPM technology and tools.

    A Methodological Framework for Evaluating Software Testing Techniques and Tools

    There exists a real need in industry for guidelines on which testing techniques to use for different testing objectives, and on how usable (effective, efficient, satisfactory) these techniques are. To date, such guidelines do not exist. They could be obtained by conducting secondary studies on a body of evidence consisting of case studies that evaluate and compare testing techniques and tools; however, such a body of evidence is also lacking. In this paper, we take a first step towards creating that body of evidence by defining a general methodological evaluation framework that can simplify the design of case studies for comparing software testing tools and make the results more precise, reliable, and easy to compare. European Commission ICT-257574; Ministerio de Ciencia e Innovación TIN2010-12312-

    Reporting experiments to satisfy professionals' information needs

    Although the aim of empirical software engineering is to provide evidence for selecting the appropriate technology, it appears that there is a lack of recognition of this work in industry. Results from empirical research only rarely seem to find their way to company decision makers. If information relevant for software managers is provided in reports on experiments, such reports can be considered as a source of information for them when they are faced with making decisions about the selection of software engineering technologies. To bridge this communication gap between researchers and professionals, we propose characterizing the information needs of software managers in order to show empirical software engineering researchers which information is relevant for decision-making and thus enable them to make this information available. We empirically investigated decision makers' information needs to identify which information they need to judge the appropriateness and impact of a software technology. We empirically developed a model that characterizes these needs. To ensure that researchers provide relevant information when reporting results from experiments, we extended existing reporting guidelines accordingly. We performed an experiment to evaluate our model with regard to its effectiveness. Software managers who read an experiment report structured according to the proposed model judged the technology's appropriateness significantly better than those reading a report about the same experiment that did not explicitly address their information needs. Our research shows that information regarding a technology, the context in which it is supposed to work, and most importantly, the impact of this technology on development costs and schedule as well as on product quality is crucial for decision makers.

    Building knowledge through families of experiments


    Repeatable Software Engineering Experiments for Comparing Defect-Detection Techniques

    Techniques for detecting defects in source code are fundamental to the success of any software development approach. A software development organization therefore needs to understand the utility of techniques such as reading or testing in its own environment. Controlled experiments have proven to be an effective means for evaluating software engineering techniques and gaining the necessary understanding about their utility. This paper presents a characterization scheme for controlled experiments that evaluate defect-detection techniques. The characterization scheme permits the comparison of results from similar experiments and establishes a context for cross-experiment analysis of those results. The characterization scheme is used to structure a detailed survey of four experiments that compared reading and testing techniques for detecting defects in source code. We encourage educators, researchers, and practitioners to use the characterization scheme.
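A characterization scheme of this kind can be thought of as a fixed record of experiment attributes that makes otherwise heterogeneous studies comparable. The sketch below illustrates the idea only; the field names are invented for illustration and are not the paper's actual scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExperimentRecord:
    """Illustrative characterization of one defect-detection experiment."""
    techniques: tuple       # e.g. ("code reading", "functional testing")
    artifact_language: str  # language of the inspected/tested artifacts
    subjects: int           # number of participants
    defect_metric: str      # how effectiveness was measured

e1 = ExperimentRecord(("code reading", "functional testing"), "C", 32, "defects found")
e2 = ExperimentRecord(("code reading", "structural testing"), "C", 27, "defects found")

# Two records support cross-experiment analysis when they share the
# attributes that the comparison depends on.
comparable = (e1.artifact_language == e2.artifact_language
              and e1.defect_metric == e2.defect_metric)
print(comparable)  # True
```

Fixing the record shape up front is what allows a survey to line up results from independently run experiments instead of comparing free-form descriptions.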

    MiSFIT: Mining Software Fault Information and Types

    As software becomes more important to society, the number, age, and complexity of systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these software systems. Software organizations can utilize data from manual fault classification to meet their process improvement needs, but many organizations lack the expertise or resources to implement such classification correctly. This dissertation addresses the need for the automation of software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of similar nature. The resulting classifications show good agreement for common software faults with no manual effort. To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults from an open source project. MiSFIT clusters the faults based on the changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
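The core idea of clustering faults by the syntax of their repairs can be illustrated with a much-simplified sketch. Everything here is invented for illustration (the token sets, the Jaccard measure, the greedy threshold clustering); the dissertation's actual algorithm may differ:

```python
# Toy fault repairs, each represented as a bag of syntactic change
# tokens (imagine tokens added/removed by the repairing commit).
repairs = {
    "F1": {"if", "==", "null-check"},
    "F2": {"if", "!=", "null-check"},
    "F3": {"for", "index", "bound"},
    "F4": {"for", "index", "<="},
}

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster(repairs, threshold=0.4):
    """Greedy single-pass clustering: put each fault into the first
    cluster whose representative repair is similar enough, else open
    a new cluster."""
    clusters = []  # list of (representative_tokens, [fault_ids])
    for fid, toks in repairs.items():
        for rep, members in clusters:
            if jaccard(rep, toks) >= threshold:
                members.append(fid)
                break
        else:
            clusters.append((toks, [fid]))
    return [members for _, members in clusters]

print(cluster(repairs))  # [['F1', 'F2'], ['F3', 'F4']]
```

The point of grouping by repair syntax is that faults fixed in the same way tend to be of the same nature, so the clusters approximate a fault taxonomy without manual labeling.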

    Investigating Effective Inspection of Object-Oriented Code

    Since the development of software inspection over twenty-five years ago it has become established as an effective means of detecting defects. Inspections were originally developed at a time when the procedural paradigm was dominant but, with the Object-Oriented (OO) paradigm growing in influence and use, there now exists a lack of guidance on how to apply inspections to OO systems. Object-oriented and procedural languages differ not only in their syntax but also in a number of more profound ways - the encapsulation of data and associated functionality, the common use of inheritance, and the concepts of polymorphism and dynamic binding. These factors influence the way that modules (classes) are created in OO systems, which in turn influences the way that OO systems are structured and execute. Failure to take this into account may hinder the application of inspections to OO code. This thesis shows that the way in which the object-oriented paradigm distributes related functionality can have a serious impact on code inspection and, to address this problem, it develops and empirically evaluates three code reading techniques.

    Comparative Analysis of Exploratory Testing and Scripted Testing: An Empirical Study

    Exploratory testing (ET) is defined as the simultaneous learning, design, and execution of tests, the very opposite of predefined scripted testing (ST). The applicability of this approach keeps growing in the software testing industry. Despite this expansion, and despite the success reported by some software development companies in their experiences adopting and using ET, the contexts and factors that favor adopting the approach in a testing methodology are not always well established. The lack of clear evidence for the productivity claimed by some practitioners in the literature adds to the problem. This work is an exploratory study with two objectives. First, to study and analyze the contexts that favor using ET as a primary testing methodology in place of scripted tests, by means of a comparative analysis of ET and ST. Second, to evaluate its productivity relative to ST in an empirical study. We developed a conceptual comparison framework with five dimensions:
    - Usage characteristics: reasons for use, software characteristics, type of business environment, financial resources, and time available for testing;
    - Management characteristics: test planning, control and monitoring, communication within the test project, and the relationship with the client;
    - Technical characteristics: testing activities, the test oracle, software risks, and test coverage;
    - Personnel characteristics: tester characteristics and organizational culture;
    - Productivity: the number of defects detected and the severity of the defects detected.
    This framework served as the basis for the comparative analysis of ET and ST. In this analysis, we compared a disciplined ST approach guided by the IEEE 829 documentation templates with a free, semi-planned ET approach represented by Session-Based Exploratory Testing (SBET). Productivity was evaluated through an empirical study that we conducted in the computer laboratories of UQÀM. Despite the limitations of the context of this empirical study, we were able to draw some useful conclusions. The results show that certain context factors of a test project can prevent the use of ET as a primary testing method. We concluded that the absence of test coverage control further restricts the types of projects in which ET could be used. Also, the expertise and skills required to perform ET could prevent its use in test projects where those skills are lacking. The results of the empirical study supported the hypothesis concerning the severity of the defects detected. Further quantitative research on the productivity of ET is needed, for which this work can serve as a starting point. ______________________________________________________________________________ AUTHOR KEYWORDS: Testing, Scripted testing, Exploratory testing, Session-Based Exploratory Testing (SBET)

    Graph Based Verification of Software Evolution Requirements

    Due to market demands and changes in the environment, software systems have to evolve. However, the size and complexity of current software systems make it time-consuming to incorporate changes. During our collaboration with industry, we observed that developers spend much time on the following evolution problems: designing runtime-reconfigurable software, obeying software design constraints while coping with evolution, and reusing old software solutions for new evolution problems. This thesis presents three processes and tool suites that aid developers and designers in tackling these problems.
    The first process and tool set allow early verification of runtime reconfiguration requirements. In this process the UML models are converted into a graph-based model. The execution semantics of UML are modeled by graph transformation rules. Using these rules, the execution of the UML models is simulated. The simulation generates a state space showing all possible reconfigurations. The runtime reconfiguration requirements are expressed in computational tree logic or in a visual state-based language and are verified over the generated state space. When verification fails, feedback on the problem is provided.
    The second process and tool set were developed for computer-aided detection of static program constraint violations. We developed a modeling language called Source Code Modeling Language (SCML) in which program elements from the source code can be represented. In the proposed process for constraint violation detection, the source code is converted into SCML models. The constraint detection is realized by graph transformation rules, which detect the violation and extract information from the SCML model to provide feedback on the location of the problem.
    The third process and tool set provide computer-aided verification of whether a design idiom can be used to implement a change request. Developers tend to implement evolution requests using software structures that are familiar to them, called design idioms. Graph transformations are used to detect whether the constraints of the design idiom are satisfied. For a given design idiom and given source files in SCML, the implementation of the idiom is simulated. If the simulation succeeds, the models are converted to source code.
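The verification approach above rests on a standard idea: apply transformation rules exhaustively to generate a state space of reachable configurations, then check properties over that space. The toy sketch below is not the thesis's tooling; the state encoding (a set of active components) and the two rules are invented to show the exploration mechanism only:

```python
from collections import deque

# Hypothetical model: a state is a frozenset of active components.
# Each "rule" maps a state to zero or more successor states,
# mimicking graph-transformation steps during reconfiguration.

def start_logger(state):
    if "app" in state and "logger" not in state:
        yield state | {"logger"}

def stop_app(state):
    if "app" in state:
        yield state - {"app"}

RULES = [start_logger, stop_app]

def state_space(initial, rules):
    """Breadth-first exploration: apply every rule to every
    reachable state until no new states appear."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for rule in rules:
            for nxt in rule(s):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

initial = frozenset({"app"})
space = state_space(initial, RULES)

# A toy reachability requirement: some reachable configuration
# has the logger running.
assert any("logger" in s for s in space)
print(len(space))  # number of reachable configurations
```

A model checker would evaluate temporal-logic formulas (e.g. CTL) over this state space rather than a simple reachability check, but the generation step is the same: rules applied to states until a fixpoint.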