50 research outputs found

    Empirical Investigation on Agile Methods Usage: Issues Identified from Early Adopters in Malaysia

    Agile methods are a set of software practices that help teams deliver products faster while still delivering what customers want. Despite these benefits, we found few studies from the Southeast Asia region, particularly Malaysia. As a result, little empirical evidence is available in the country, making adoption harder. When taking up a new method, the experience of other practitioners is critical: it describes what is important, what is possible and what is not possible with Agile. We conducted a qualitative study to understand the issues faced by early adopters in Malaysia, where Agile methods are still relatively new. The initial study involved 13 participants, including project managers, CEOs, founders and software developers from seven organisations. Our study shows that social and human aspects are important when using Agile methods. While technical aspects have always been considered part of software development, we found these factors to be less important when using Agile methods. The results can serve as guidelines for practitioners in the country and the neighbouring regions.

    Software defect prediction: do different classifiers find the same defects?

    During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above a predictive ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix, and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Our results confirm that a unique subset of defects can be detected by specific classifiers; however, while some classifiers are consistent in the predictions they make, others vary in theirs. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
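    The study above compares classifiers through their confusion matrices and through the overlap in the individual defects each one finds. Below is a minimal sketch of that style of comparison using scikit-learn; the synthetic dataset, the default classifier settings, and the use of a decision tree as a stand-in for R's rpart are illustrative assumptions, not the paper's actual setup.

    ```python
    # Sketch: compare which individual defects different classifiers find.
    # The (X, y) dataset is synthetic and illustrative, not the paper's data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier  # stand-in for R's rpart

    # Imbalanced binary data: class 1 plays the role of "defective module".
    X, y = make_classification(n_samples=1000, n_features=20,
                               weights=[0.8, 0.2], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    classifiers = {
        "RandomForest": RandomForestClassifier(random_state=0),
        "NaiveBayes": GaussianNB(),
        "CART": DecisionTreeClassifier(random_state=0),
        "SVM": SVC(),
    }

    found = {}  # indices of true defects each classifier detects
    for name, clf in classifiers.items():
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(name, confusion_matrix(y_te, pred).ravel())  # tn, fp, fn, tp
        found[name] = set(np.where((pred == 1) & (y_te == 1))[0])

    # Defects caught by only one classifier -- the "unique subset" effect
    # that motivates non-majority-voting ensembles.
    for name, hits in found.items():
        others = set().union(*(h for n, h in found.items() if n != name))
        print(name, "unique defects:", len(hits - others))
    ```

    If each classifier catches defects no other one sees, a majority vote would discard exactly those predictions, which is why the abstract points toward ensemble decision strategies that are not based on majority voting.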

    ATLAS detector and physics performance: Technical Design Report, 1


    Flood hazard in areas below a dam under extreme hydrological events


    Silting of small water reservoirs and quality of sediments


    The Importance of the Correlation in Crossover Experiments

    Context: In empirical software engineering, crossover designs are popular for experiments comparing software engineering techniques that must be undertaken by human participants. However, their value depends on the correlation (r) between the outcome measures of the same participants. Software engineering theory emphasizes the importance of individual skill differences, so we would expect values of r to be relatively high; however, few researchers have reported them. Goal: To investigate the values of r found in software engineering experiments. Method: We undertook simulation studies to investigate the theoretical and empirical properties of r, then examined the values of r observed in 35 software engineering crossover experiments. Results: The level of r obtained by analysing our 35 crossover experiments was small. Estimates based on means, medians and random-effects analysis disagreed, but all fell between 0.2 and 0.3. As expected, our analyses found large variability among individual r estimates for small sample sizes, but no indication that r estimates were larger for the experiments with larger sample sizes, which exhibited smaller variability. Conclusions: Low observed r values cast doubt on the validity of crossover designs for software engineering experiments. However, if the cause of low r values relates to training limitations or toy tasks, this affects all software engineering (SE) experiments involving human participants. For all human-intensive SE experiments, we recommend more intensive training, then tracking the improvement of participants as they practice using specific techniques, before formally testing the effectiveness of those techniques.
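    To make the role of r concrete, here is a minimal simulation sketch of an experiment in which each participant's two outcome scores share a stable skill component; the additive model, the effect size, and the variances are illustrative assumptions, not the paper's data or analysis.

    ```python
    # Sketch: simulate paired outcomes from a crossover-style experiment and
    # estimate the within-participant correlation r. Model parameters are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 30                                # participants
    skill = rng.normal(0, 1, n)           # stable individual skill
    noise_a = rng.normal(0, 1, n)         # period-specific noise
    noise_b = rng.normal(0, 1, n)
    effect = 0.5                          # assumed benefit of technique B

    score_a = skill + noise_a             # outcome using technique A
    score_b = skill + effect + noise_b    # outcome using technique B

    r = np.corrcoef(score_a, score_b)[0, 1]
    print(f"estimated r = {r:.2f}")
    ```

    Under this model the theoretical r is var(skill) / (var(skill) + var(noise)), i.e. 0.5 for the variances above; shrinking the skill variance relative to the noise pushes r down toward the 0.2-0.3 range reported in the abstract, which is exactly the regime in which a crossover design loses its advantage.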

    Empirical Evidence Principle and Joint Engagement Practice to Introduce XP
