297 research outputs found

    A Review of Metrics and Modeling Techniques in Software Fault Prediction Model Development

    This paper surveys software fault prediction models developed with various data analytic techniques reported in the software engineering literature. The study is split into three broad areas: (a) a description of the software metrics suites reported and validated in the literature; (b) a brief outline of previous research on the development of software fault prediction models based on various analytic techniques, summarized using a taxonomy of those techniques; (c) a review of the advantages of using combinations of metrics. This last area is comparatively new and needs more research effort.

    Mining heterogeneous information graph for health status classification

    In the medical domain, there exists a large volume of data from multiple sources such as electronic health records, general health examination results, and surveys. These data contain useful information reflecting people's health and provide great opportunities for studies to improve the quality of healthcare. However, how to mine these data effectively and efficiently remains a critical challenge. In this paper, we propose an innovative classification model for knowledge discovery from patients' personal health repositories. Based on analytics of massive data in the National Health and Nutrition Examination Survey, the study builds a classification model to classify patients' health status and reveal the specific diseases a patient may be suffering from. This paper makes significant contributions to the advancement of knowledge in data mining with an innovative classification model specifically crafted for domain-based data. Moreover, this research contributes to the healthcare community by providing a deep understanding of people's health with accessibility to the patterns in various observations.

    On Using Active Learning and Self-Training when Mining Performance Discussions on Stack Overflow

    Abundant data is the key to successful machine learning. However, supervised learning requires annotated data, which are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to the examples that bring the most value to a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples for which a classifier is the most certain. We report our experiences of using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing the performance of software components. We show that the training examples deemed most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training. Comment: Preprint of paper accepted for the Proc. of the 21st International Conference on Evaluation and Assessment in Software Engineering, 201