
    Empirical Evaluation of Semi-Supervised Naïve Bayes for Active Learning

    This thesis describes an empirical evaluation of semi-supervised and active learning, individually and in combination, for the naïve Bayes classifier. Active learning aims to minimise the amount of labelled data required to train the classifier by using the model to direct the labelling of the most informative unlabelled examples. The key difficulty with active learning is that the initial model often gives poor direction for labelling the unlabelled data in the early stages. However, using both labelled and unlabelled data with semi-supervised learning might achieve a better initial model, because the limited labelled data are augmented by the information in the unlabelled data. In this thesis, a suite of benchmark datasets is used to evaluate the benefit of semi-supervised learning, and learning curves are presented to compare the performance of each approach. First, we show that semi-supervised naïve Bayes does not significantly improve the performance of the naïve Bayes classifier. Subsequently, a down-weighting technique is used to control the influence of the unlabelled data, but again this does not improve performance. In the next experiment, a novel algorithm is proposed that uses a sigmoid transformation to recalibrate the overly confident naïve Bayes classifier. This algorithm does not significantly improve on the naïve Bayes classifier, but it does improve the semi-supervised naïve Bayes classifier. In the final experiment we investigate the effectiveness of the combination of active and semi-supervised learning and empirically illustrate when the combination works and when it does not.
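
    The down-weighting idea the abstract describes can be illustrated with a short sketch: naïve Bayes trained by EM on both labelled and unlabelled data, with the unlabelled examples scaled by a factor lam. This is a minimal illustration, not the thesis's exact algorithm; the use of scikit-learn's MultinomialNB, hard pseudo-labels, and the default lam value are all assumptions.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def semi_supervised_nb(X_lab, y_lab, X_unlab, lam=0.1, n_iter=10):
    """EM-style semi-supervised naive Bayes with down-weighted
    unlabelled data: lam=0 ignores the unlabelled set, lam=1
    weights it as heavily as the labelled set."""
    model = MultinomialNB().fit(X_lab, y_lab)
    X_all = np.vstack([X_lab, X_unlab])
    weights = np.concatenate([np.ones(len(y_lab)),
                              lam * np.ones(len(X_unlab))])
    for _ in range(n_iter):
        # E-step: pseudo-label the unlabelled examples with the
        # current model (hard labels, for simplicity).
        y_all = np.concatenate([y_lab, model.predict(X_unlab)])
        # M-step: refit with the unlabelled examples down-weighted.
        model = MultinomialNB().fit(X_all, y_all, sample_weight=weights)
    return model
```

    Setting lam to 0 recovers the purely supervised baseline, which makes the effect of the unlabelled data easy to isolate on a learning curve.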

    Advanced Data Analytics for Systematic Review Creation and Update


    Feature extraction and classification of spam emails


    HoloDetect: Few-Shot Learning for Error Detection

    We introduce a few-shot learning framework for error detection. We show that data augmentation (a form of weak supervision) is key to training high-quality, ML-based error detection models that require minimal human involvement. Our framework consists of two parts: (1) an expressive model to learn rich representations that capture the inherent syntactic and semantic heterogeneity of errors; and (2) a data augmentation model that, given a small seed of clean records, uses dataset-specific transformations to automatically generate additional training data. Our key insight is to learn data augmentation policies from the noisy input dataset in a weakly supervised manner. We show that our framework detects errors with an average precision of ~94% and an average recall of ~93% across a diverse array of datasets that exhibit different types and amounts of errors. We compare our approach to a comprehensive collection of error detection methods, ranging from traditional rule-based methods to ensemble-based and active learning approaches. We show that data augmentation yields an average improvement of 20 F1 points while requiring access to 3x fewer labeled examples than other ML approaches.
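
    The augmentation step is the part that translates most directly into code: starting from a small seed of clean records, apply transformations to synthesize erroneous variants and label them accordingly. The sketch below uses a fixed, hand-written transformation set as a stand-in; HoloDetect itself learns its augmentation policies from the noisy input in a weakly supervised manner, so everything here is illustrative.

```python
import random

def swap_adjacent(value: str) -> str:
    """Introduce a transposition typo."""
    if len(value) < 2:
        return value
    i = random.randrange(len(value) - 1)
    return value[:i] + value[i + 1] + value[i] + value[i + 2:]

def truncate(value: str) -> str:
    """Drop the final character."""
    return value[:-1]

# Hand-picked transformations; HoloDetect would learn these policies.
TRANSFORMS = [swap_adjacent, truncate, lambda v: "", str.upper]

def augment(clean_values, n_per_value=3):
    """Return (value, label) pairs: 0 for clean, 1 for erroneous."""
    data = [(v, 0) for v in clean_values]
    for v in clean_values:
        for _ in range(n_per_value):
            data.append((random.choice(TRANSFORMS)(v), 1))
    return data

training_pairs = augment(["Chicago", "Boston", "Seattle"])
```

    The resulting pairs can then train a classifier over value representations, which is where the expressive representation model in part (1) comes in.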

    Machine Learning Based Physical Activity Extraction for Unannotated Acceleration Data

    Sensor-based human activity recognition (HAR) is an emerging and challenging research area. Physical activity has been associated with many health benefits and even with a reduced risk of various diseases. Sensor data related to the physical activities of people can be collected with wearable devices and embedded sensors, for example in smartphones and smart environments. HAR has been successful in recognizing physical activities with machine learning methods. However, annotating sensor data is a critical challenge in HAR. Most existing approaches use supervised machine learning methods, which means that true labels need to be given to the data when training a machine learning model. Supervised deep learning methods have outperformed traditional machine learning methods in HAR, but they require an even more extensive amount of data and true labels. In this thesis, machine learning methods are used to develop a solution that can recognize physical activity (e.g., walking and sedentary time) from unannotated acceleration data collected using a wearable accelerometer device. It is shown to be beneficial to collect and annotate physical-activity data from only one person. Supervised classifiers can be trained with small, labeled acceleration datasets, and more training data can be obtained in a semi-supervised setting by leveraging knowledge from available unannotated data. The semi-supervised En-Co-Training method is used with the traditional supervised machine learning methods K-nearest neighbor (KNN) and random forest (RF). In addition, activity intensities produced by the cut-point analysis of the OMGUI software are used as reference information to increase the confidence of correctly selecting the pseudo-labels that are added to the training data. A new metric is suggested to help evaluate reliability when no true labels are available: it calculates the fraction of predictions that have the correct intensity, according to the cut-point analysis of the OMGUI software, out of all predictions. The reliability of the supervised KNN and RF classifiers reaches 88% accuracy and a C-index value of 0.93, while the accuracy of K-means clustering remains at 72% when the models are tested on labeled acceleration data. The initial supervised classifiers and the classifiers retrained in a semi-supervised setting are tested on unlabeled data collected from 12 people and measured with the new metric. The overall results improve from 96-98% to 98-99%. For activities that were more challenging for the initial classifiers, the results improve from 55-81% to 67-81% for taking a walk and from 0-95% to 95-98% for jogging. It is shown that the results of the KNN and RF classifiers consistently increase in the semi-supervised setting when tested on unannotated, real-life data from 12 people.
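
    The semi-supervised loop the abstract describes can be sketched as a pseudo-labelling round: train KNN and RF on the small labelled set, then add unlabelled windows to the training data only when both classifiers agree with high confidence. This is a loose illustration of the En-Co-Training idea, not the thesis's method; the agreement-and-confidence test below stands in for the intensity check based on the OMGUI cut-point analysis, and the threshold is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def pseudo_label_round(X_lab, y_lab, X_unlab, threshold=0.9):
    """One round of pseudo-labelling with a KNN/RF pair."""
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_lab, y_lab)
    rf = RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
    p_knn = knn.predict_proba(X_unlab)
    p_rf = rf.predict_proba(X_unlab)
    y_knn = p_knn.argmax(axis=1)
    y_rf = p_rf.argmax(axis=1)
    # Accept only windows where both classifiers agree and are
    # confident; this replaces the intensity-based reference check.
    mask = ((y_knn == y_rf)
            & (p_knn.max(axis=1) >= threshold)
            & (p_rf.max(axis=1) >= threshold))
    X_new = np.vstack([X_lab, X_unlab[mask]])
    y_new = np.concatenate([y_lab, knn.classes_[y_knn[mask]]])
    return X_new, y_new, X_unlab[~mask]
```

    Repeating the round on the remaining unlabelled windows grows the training set until no new examples pass the test.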

    Interacting meaningfully with machine learning systems: Three experiments

    Although machine learning is becoming commonly used in today's software, there has been little research into how end users might interact with machine learning systems beyond communicating simple “right/wrong” judgments. If users could work hand-in-hand with machine learning systems, both the users' understanding and trust of the system and the accuracy of the learning system could improve. We conducted three experiments to understand the potential for rich interactions between users and machine learning systems. The first experiment was a think-aloud study that investigated users' willingness to interact with machine learning reasoning, and what kinds of feedback users might give to machine learning systems. We then investigated the viability of introducing such feedback into machine learning systems: specifically, how to incorporate some of these types of user feedback, and what their impact was on the accuracy of the system. Taken together, the results of our experiments show that supporting rich interactions between users and machine learning systems is feasible for both user and machine. This points to rich human–computer collaboration via on-the-spot interactions as a promising direction for machine learning systems and users to collaboratively share intelligence.

    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality assurance aims to ensure that the applications developed are failure free. Some modern systems are intricate, due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since it is a mechanism that correctly predicts the defect proneness of modules and classifies them, which saves resources, time and developer effort. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature review revealed that process metrics are better predictors of defects in versioned systems, being based on historic source code over time. These metrics are extracted from the source-code module and include, for example, the number of additions and deletions in the source code, the number of distinct committers and the number of modified lines. In this research, defect prediction was conducted using open source software (OSS) from software product lines (SPL), hence process metrics were chosen. Datasets used in defect prediction may contain non-significant and redundant attributes that can affect the accuracy of machine-learning algorithms. In order to improve the prediction accuracy of classification models, only the features that are significant for the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data. Feature selection is a pre-processing step that helps to reduce the dimensionality of data. Feature selection techniques include information-theoretic methods based on the entropy concept. This study experimentally evaluated the efficiency of these feature selection techniques, and it was found that software defect prediction using significant attributes improves prediction accuracy. A novel MICFastCR model was developed that uses the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation Based Filter (FCBF) to eliminate redundant attributes. Machine learning algorithms were then run to predict software defects. MICFastCR achieved the highest prediction accuracy as reported by various performance measures.
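
    The two-stage selection that MICFastCR describes (MIC for relevance, FCBF for redundancy) can be sketched as follows. This is an approximation under stated assumptions: scikit-learn's mutual_info_classif and mutual_info_regression stand in for MIC and for FCBF's symmetrical uncertainty, and the relevance threshold is illustrative; a faithful implementation would use a MIC estimator such as minepy's MINE.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def micfastcr_like(X, y, relevance_threshold=0.01):
    """Two-stage selection: keep features relevant to the class,
    then drop any feature that depends more strongly on an
    already-selected feature than on the class (FCBF-style)."""
    relevance = mutual_info_classif(X, y)  # stand-in for MIC scores
    # Candidates above the threshold, most relevant first.
    order = [i for i in np.argsort(relevance)[::-1]
             if relevance[i] >= relevance_threshold]
    selected = []
    for i in order:
        redundant = any(
            mutual_info_regression(X[:, [i]], X[:, j])[0] >= relevance[i]
            for j in selected)
        if not redundant:
            selected.append(i)
    return selected  # column indices of the retained features
```

    The selected indices can then feed any downstream classifier, mirroring the study's setup of running machine learning algorithms on the reduced feature set.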