
    The impact of training data characteristics on ensemble classification of land cover

    Supervised classification of remote sensing imagery has long been recognised as an essential technology for large-area land cover mapping. Remote sensing derived land cover and forest classification maps are important sources of information for understanding environmental processes and informing natural resource management decisions. In recent years, the supervised transformation of remote sensing data into thematic products has been advanced by the introduction and development of machine learning classification techniques. Applied to a variety of science and engineering problems over the past twenty years (Lary et al., 2016), machine learning offers greater accuracy and efficiency than traditional parametric classifiers and can handle large data volumes across complex measurement spaces. The Random forest (RF) classifier in particular has become popular in the remote sensing community, with a range of commonly cited advantages, including its low parameterisation requirements, excellent classification results, and ability to handle noisy observation data and outliers in a complex measurement space, even when the training data are small relative to the study area. In the context of large-area land cover classification for forest cover using multisource remote sensing and geospatial data, this research examines two proposed advantages of the RF classifier: insensitivity to training data noise (mislabelling) and its handling of training data class imbalance. Through margin theory, the research also investigates the utility of ensemble learning, in which multiple base classifiers are combined to reduce generalisation error, as a means of designing more efficient classifiers, improving classification performance, and reducing reference (training and test) data redundancy.

    The first part of the thesis (chapters 2 and 3) introduces the experimental setting and data used in the research, including a description (in chapter 2) of the sampling framework for the reference data used in the classification experiments that follow. Chapter 3 evaluates the performance of the RF classifier applied across a 7.2 million hectare public land study area in Victoria, Australia, and describes an open-source framework for deploying the RF classifier over large areas and processing significant volumes of multi-source remote sensing and ancillary spatial data. The second part of the thesis (research chapters 4 through 6) examines the effect of training data characteristics (class imbalance and mislabelling) on the performance of RF, and explores the ensemble margin as a means of both examining RF classification performance and informing training data sampling to improve classification accuracy. Results of the binary and multiclass experiments described in chapter 4 provide insights into the behaviour of RF when training data are not evenly distributed among classes and contain systematically mislabelled instances. They show that while the error rate of the RF classifier is relatively insensitive to mislabelled training data (in the multiclass experiment, an overall Kappa of 78.3% with no mislabelled instances falling to 70.1% with 25% mislabelling in each class), the associated confidence falls faster than overall accuracy as the rate of mislabelled training data increases. This chapter also demonstrates that deliberately imbalanced training data can be introduced to reduce error in the classes that are most difficult to classify.
The relationship between per-class and overall classification performance and the diversity of members in an RF ensemble classifier is explored through the experiments presented in chapter 5. This research examines ways of targeting particular training data samples to induce RF ensemble diversity and improve per-class and overall classification performance and efficiency. Through use of the ensemble margin, the study offers insights into the trade-off between ensemble classification accuracy and diversity. The research shows that boosting diversity among RF ensemble members, by emphasising the contribution of lower-margin training instances in the learning process, is an effective means of improving classification performance, particularly for more difficult or rarer classes, and of reducing information redundancy and improving the efficiency of classification problems. Research chapter 6 applies the RF classifier to the calculation of Landscape Pattern Indices (LPIs) from classification prediction maps, and examines the sensitivity of these indices to training data characteristics and to sampling based on the ensemble margin. This research reveals that a range of commonly used LPIs are significantly sensitive to training data mislabelling in RF classification, as well as to margin-based training data sampling. In conclusion, this thesis examines proposed advantages of the popular machine learning classifier Random forests: its relative insensitivity to training data noise (mislabelling) and its ability to handle class imbalance. It also explores the utility of the ensemble margin for designing more efficient classifiers, measuring and improving classification performance, and designing ensemble classification systems that use reference data more efficiently and effectively, with less data redundancy. These findings have practical applications and implications for large-area land cover classification, for which the generation of high-quality reference data is often a time-consuming, subjective, and expensive exercise.
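
To make the ensemble-margin idea above concrete, the sketch below (not the thesis code) trains a scikit-learn RandomForestClassifier on synthetic, class-imbalanced data, computes a per-instance margin from the votes of the individual trees, and re-weights low-margin (harder) training instances when refitting. The data, the margin definition used (votes for the correct class minus the largest vote count for any other class, divided by the number of trees), and the weighting scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced multiclass data standing in for the remote sensing features.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, weights=[0.5, 0.3, 0.15, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Per-tree votes on the training set: shape (n_trees, n_samples).
# Assumes class labels are the integers 0..n_classes-1.
votes = np.stack([tree.predict(X_train) for tree in rf.estimators_])

def ensemble_margin(votes, y_true, n_classes):
    """Margin = (votes for the correct class - largest vote count for any
    other class) / number of trees; values lie in [-1, 1]."""
    n_trees, n_samples = votes.shape
    counts = np.zeros((n_samples, n_classes))
    for c in range(n_classes):
        counts[:, c] = (votes == c).sum(axis=0)
    true_votes = counts[np.arange(n_samples), y_true]
    counts[np.arange(n_samples), y_true] = -1   # mask out the true class
    return (true_votes - counts.max(axis=1)) / n_trees

margin = ensemble_margin(votes, y_train, n_classes=4)

# Emphasise low-margin instances in a second forest, one simple way of
# boosting diversity and rare-class accuracy (weighting scheme is illustrative).
weights = 1.0 - (margin + 1.0) / 2.0 + 0.1
rf_weighted = RandomForestClassifier(n_estimators=200, random_state=0)
rf_weighted.fit(X_train, y_train, sample_weight=weights)

print("baseline accuracy:       ", rf.score(X_test, y_test))
print("margin-weighted accuracy:", rf_weighted.score(X_test, y_test))
```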

    Detecting Prominent Features and Classifying Network Traffic for Securing Internet of Things Based on Ensemble Methods

    The rapid growth of the internet and of connected devices, ranging from cloud systems to the Internet of Things, has raised critical concerns about securing these systems. In the recent past, security attacks on different kinds of devices have evolved in complexity and diversity. One of the challenges is establishing secure communication among the various devices and systems in the network. Despite being protected with authentication and encryption, the network still needs to be defended against cyber-attacks, so network traffic has to be closely monitored to detect anomalies and intrusions. Intrusion detection can be framed as a network traffic classification problem in machine learning. Existing network traffic classification methods require a lot of training and data preprocessing, and this problem is more serious when the dataset is huge. In addition, the machine learning and deep learning methods used so far were trained on datasets that contain obsolete attacks. This thesis addresses these problems by applying ensemble methods to an up-to-date network attack dataset. Ensemble methods use multiple learning algorithms to obtain better classification accuracy than could be obtained when any single constituent algorithm is applied alone. The dataset used for network traffic classification covers recent attack scenarios and contains over fifteen attacks. This approach shows that ensemble methods can classify network traffic and detect intrusions with shorter model training times and less pre-processing, without feature selection. The thesis also shows that using less than ten percent of the input dataset's features leads to accuracy similar to that achieved on the whole dataset, which can greatly reduce training times and classification duration in real-time scenarios.
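
As a rough illustration of the feature-reduction result described above, the sketch below (not the thesis code, and using synthetic data rather than the network attack dataset) trains a Random Forest ensemble on all features, ranks them by importance, and retrains on roughly the top ten percent, comparing accuracy and training time.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic multiclass "traffic" data: 80 flow features, only a few informative.
X, y = make_classification(n_samples=5000, n_features=80, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Baseline ensemble on all features.
t0 = time.time()
full = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
t_full = time.time() - t0

# Keep only the most important ~10% of features and retrain.
k = max(1, X.shape[1] // 10)
top = np.argsort(full.feature_importances_)[::-1][:k]
t0 = time.time()
small = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr[:, top], y_tr)
t_small = time.time() - t0

print(f"all {X.shape[1]} features: acc={accuracy_score(y_te, full.predict(X_te)):.3f}, "
      f"train={t_full:.2f}s")
print(f"top {k} features: acc={accuracy_score(y_te, small.predict(X_te[:, top])):.3f}, "
      f"train={t_small:.2f}s")
```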

    Machine learning techniques applied to hydrate failure detection on production lines

    The present work proposes a methodology that covers the whole process of classifying hydrate-formation-related faults on the production lines of an offshore oil platform. Three datasets are analyzed, each composed of a variety of sensor measurements related to the wells of a different offshore oil platform. Our methodology goes through each step of dataset cleaning, which includes identification of numerical and categorical tags, removal of spurious values and outliers, treatment of missing data by interpolation, and identification of the relevant faults and tags on the platform. The work designs a framework that puts together several classic machine learning techniques to perform the failure identification. The system is composed of three major blocks: the first block performs feature extraction (as the input data is a set of time-series signals, each signal is represented by its statistical metrics computed over a sliding window); the second block maps the previous block's output to a more suitable space using z-score normalization and Principal Component Analysis (PCA); the last block is the classifier, for which the Random Forest classifier was adopted due to its simple tuning and excellent performance. We also propose a technique to increase the reliability of the normal-operation data. When handling a database composed of real data, it is common to face a lot of mislabeled data, which can significantly degrade model performance. We therefore deploy a technique to reduce the number of mislabeled samples, which improved performance by 7.93% on average, reaching over 80% accuracy in all single-class scenarios.
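
A minimal sketch of the three-block pipeline described above, using synthetic signals in place of the platform sensor data; the window length, feature set, and labels are placeholder assumptions, so the reported accuracy is only illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, window = 3000, 50

# Synthetic stand-in for well sensor signals (pressure, temperature, flow).
signals = pd.DataFrame({
    "pressure": rng.normal(200, 5, n_samples).cumsum() * 0.01,
    "temperature": rng.normal(60, 1, n_samples),
    "flow": rng.normal(30, 2, n_samples),
})
# Placeholder labels: 0 = normal operation, 1 = hydrate-related fault.
labels = pd.Series(rng.integers(0, 2, n_samples))

# Block 1: statistical features (mean, std, min, max) over a sliding window.
feats = pd.concat(
    [signals.rolling(window).agg(stat).add_suffix(f"_{stat}")
     for stat in ("mean", "std", "min", "max")],
    axis=1,
).dropna()
y = labels.loc[feats.index]

# Blocks 2 and 3: z-score normalization + PCA, then a Random Forest classifier.
model = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                      RandomForestClassifier(n_estimators=200, random_state=0))

X_tr, X_te, y_tr, y_te = train_test_split(feats.values, y.values, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```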

    Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey

    Image classification systems have recently made a giant leap with the advancement of deep neural networks. However, these systems require an excessive amount of labeled data to be adequately trained. Gathering a correctly annotated dataset is not always feasible due to several factors, such as the expense of the labeling process or the difficulty of correctly classifying data, even for experts. Because of these practical challenges, label noise is a common problem in real-world datasets, and numerous methods for training deep neural networks with label noise have been proposed in the literature. Although deep neural networks are known to be relatively robust to label noise, their tendency to overfit makes them vulnerable to memorizing even random noise. It is therefore crucial to account for label noise and to develop counter-algorithms that mitigate its adverse effects in order to train deep neural networks efficiently. Even though an extensive survey of machine learning techniques under label noise exists, the literature lacks a comprehensive survey of methodologies centered explicitly on deep learning in the presence of noisy labels. This paper presents these algorithms while categorizing them into one of two subgroups: noise-model-based and noise-model-free methods. Algorithms in the first group aim to estimate the noise structure and use this information to avoid the adverse effects of noisy labels. In contrast, methods in the second group try to design inherently noise-robust algorithms using approaches such as robust losses, regularizers, or other learning paradigms.
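
To illustrate the two families named above, the following PyTorch sketch shows one representative of each: a noise-model-based "forward correction" loss that passes the model's class probabilities through an assumed noise transition matrix, and a noise-model-free robust loss (symmetric cross entropy). The transition matrix, class count, and hyperparameters are illustrative assumptions, not taken from the survey.

```python
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_targets, T):
    """Noise-model-based: cross entropy against noisy labels after mapping the
    clean-class probabilities through the transition matrix T (shape [C, C],
    T[i, j] = P(noisy label j | true label i))."""
    clean_probs = F.softmax(logits, dim=1)
    noisy_probs = clean_probs @ T
    return F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_targets)

def symmetric_ce(logits, targets, alpha=0.1, beta=1.0, num_classes=10):
    """Noise-model-free: standard cross entropy plus a reverse cross entropy
    term that is more tolerant of mislabeled examples."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(targets, num_classes).float().clamp(min=1e-4)
    rce = (-probs * torch.log(one_hot)).sum(dim=1).mean()
    return alpha * ce + beta * rce

# Toy usage with random logits and labels; T is an assumed noise model
# (80% label retention, the rest spread uniformly).
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
T = torch.full((10, 10), 0.02) + 0.8 * torch.eye(10)
print(forward_corrected_ce(logits, labels, T), symmetric_ce(logits, labels))
```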

    Labeling large scale social media data using budget-driven One-class SVM classification

    Social media classification problems have drawn more and more attention in the past few years. With the rapid development of the Internet and the popularity of computers, there is an astronomical amount of information on social media platforms. These datasets are generally large scale and often corrupted by noise, and the presence of noise in a training set has a strong impact on the performance of supervised learning (classification) techniques. This thesis presents a budget-driven One-class SVM approach suitable for large-scale social media data classification. Our approach is based on an existing online One-class SVM learning algorithm, referred to as the STOCS (Self-Tuning One-Class SVM) algorithm. To justify our choice, we first analyze the noise-resilience of STOCS using synthetic data. The experiments suggest that STOCS is more robust against label noise than several other existing approaches. Next, to handle the big-data classification problem for social media data, we introduce several budget-driven features, which allow the algorithm to be trained within limited time and under a limited memory requirement. Moreover, the resulting algorithm can be easily adapted to changes in dynamic data with minimal computational cost. Compared with two state-of-the-art approaches, LIBLINEAR and kNN, our approach is shown to be competitive while requiring less memory and time.
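
The sketch below is not the STOCS algorithm itself, but a simplified stand-in for the budgeted, online one-class idea: a fixed-size kernel approximation (the memory "budget") combined with scikit-learn's SGDOneClassSVM, trained incrementally on streamed mini-batches of synthetic target-class data.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDOneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(5000, 10))    # inlier (target-class) data
outliers = rng.normal(4, 1, size=(200, 10))   # simulated noise / outliers

# "Budget": the kernel expansion is fixed at 200 components regardless of how
# many samples stream through, which bounds memory use.
feature_map = Nystroem(gamma=0.1, n_components=200, random_state=0).fit(normal[:500])
ocsvm = SGDOneClassSVM(nu=0.05, random_state=0)

# Stream the target-class data in mini-batches (online learning).
for batch in np.array_split(normal, 50):
    ocsvm.partial_fit(feature_map.transform(batch))

pred_in = ocsvm.predict(feature_map.transform(normal[:1000]))   # +1 = inlier
pred_out = ocsvm.predict(feature_map.transform(outliers))       # -1 = outlier
print("inliers accepted: ", (pred_in == 1).mean())
print("outliers rejected:", (pred_out == -1).mean())
```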

    LSTM Models to Support the Selective Antibiotic Treatment Strategy of Dairy Cows in the Dry Period

    Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
    Udder inflammation, known as mastitis, is the most significant disease of dairy cows worldwide, causing substantial economic losses. The current common strategy to reduce this problem is the prophylactic administration of antibiotic treatment to cows during their dry period. Paradoxically, the indiscriminate use of antibiotics in animals and humans has been the leading cause of antimicrobial resistance, a concern for several public health organizations. In light of this, at the beginning of 2022 the European Union made it illegal to routinely administer antibiotics on farms, with Regulation 2019/6 of 11 December 2018. Given this new scenario, the objective of this study was to produce a model that supports veterinarians' decisions when administering antibiotics in the dry period of dairy cows. Deep learning models were used, namely LSTM layers that operate on dynamic features from milk recordings and a dense layer that uses static features. Two approaches were chosen. The first is a binary classification model that considers the occurrence of mastitis within 60 days after calving. The second is a multiclass classification model based on veterinary expert judgment. In each approach, three models were implemented: a Vanilla LSTM, a Stacked LSTM, and a Stacked LSTM with a dense layer working in parallel. The best performances from the binary and multiclass approaches were 65% and 84% accuracy, respectively, so the models of the multiclass approach outperformed the binary one. The capture of long- and short-term dependencies in the LSTM models, especially combined with static features, produced promising results, which will contribute to a machine learning system with a prompt and affordable response, allowing the administration of antibiotics in dairy cows to be reduced to what is strictly necessary.
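
A hedged Keras sketch in the spirit of the third architecture described above: a stacked LSTM over the dynamic milk-recording sequence with a parallel dense branch over static features, merged for the binary "mastitis within 60 days" output. Sequence length, feature counts, and layer sizes are illustrative assumptions, not the dissertation's configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_dynamic, n_static = 10, 6, 8   # assumed dimensions

# Dynamic branch: stacked LSTM over the milk-recording time series.
seq_in = keras.Input(shape=(n_timesteps, n_dynamic), name="milk_recordings")
x = layers.LSTM(32, return_sequences=True)(seq_in)
x = layers.LSTM(16)(x)

# Static branch: dense layer over cow-level features working in parallel.
static_in = keras.Input(shape=(n_static,), name="static_features")
s = layers.Dense(16, activation="relu")(static_in)

merged = layers.concatenate([x, s])
out = layers.Dense(1, activation="sigmoid")(merged)   # binary mastitis output

model = keras.Model(inputs=[seq_in, static_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy training run on random placeholder data.
Xd = np.random.rand(64, n_timesteps, n_dynamic)
Xs = np.random.rand(64, n_static)
y = np.random.randint(0, 2, size=(64, 1))
model.fit([Xd, Xs], y, epochs=1, verbose=0)
```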

    Machine learning methods for the study of cybersickness: a systematic review

    This systematic review offers a world-first critical analysis of machine learning methods and systems, along with future directions, for the study of cybersickness induced by virtual reality (VR). VR is becoming increasingly popular and is an important part of current advances in human training, therapies, entertainment, and access to the metaverse. Usage of this technology is limited by cybersickness, a common debilitating condition experienced upon VR immersion and accompanied by a mix of symptoms including nausea, dizziness, fatigue, and oculomotor disturbances. Machine learning can be used to identify cybersickness and is a step towards overcoming these physiological limitations. Practical implementation is possible with optimised data collection from wearable devices and appropriate algorithms that incorporate advanced machine learning approaches. The present systematic review focuses on 26 selected studies concerning machine learning on biometric and neuro-physiological signals obtained from wearable devices for the automatic identification of cybersickness. The methods, data processing, and machine learning architectures are examined, together with suggestions for future exploration of the detection and prediction of cybersickness. A wide range of immersion environments, participant activities, features, and machine learning architectures were identified. Although models for cybersickness detection have been developed, the literature still lacks a model for the prediction of first-instance events. Future research is pointed towards goal-oriented data selection and labelling, as well as the use of brain-inspired spiking neural network models, to achieve better accuracy and understanding of the complex spatio-temporal brain processes related to cybersickness.