7 research outputs found
Autoencoders and Generative Adversarial Networks for Imbalanced Sequence Classification
Generative Adversarial Networks (GANs) have been used in many different
applications to generate realistic synthetic data. We introduce a novel GAN
with Autoencoder (GAN-AE) architecture to generate synthetic samples for
variable length, multi-feature sequence datasets. In this model, we develop a
GAN architecture with an additional autoencoder component, using recurrent
neural networks (RNNs) for each part of the model, to generate synthetic data
that improves classification accuracy on a highly imbalanced medical device
dataset. In addition to the medical device dataset,
we also evaluate the GAN-AE performance on two additional datasets and
demonstrate the application of GAN-AE to a sequence-to-sequence task where both
synthetic sequence inputs and sequence outputs must be generated. To evaluate
the quality of the synthetic data, we train encoder-decoder models both with
and without the synthetic data and compare the classification model
performance. We show that a model trained with GAN-AE generated synthetic data
outperforms models trained with synthetic data generated both with standard
oversampling techniques such as SMOTE and Autoencoders as well as with state of
the art GAN-based models
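As a point of reference for the SMOTE baseline the abstract compares against, here is a minimal sketch of the classic SMOTE interpolation step on fixed-length feature vectors (function and parameter names are illustrative, not from the paper). Its restriction to fixed-length vectors is one reason variable-length sequence data calls for generative models such as GAN-AE:

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Classic SMOTE: create synthetic minority samples by interpolating
    between a minority sample and one of its k nearest neighbours.
    Works on fixed-length feature vectors only."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance (excluding x)
        neighbours = sorted(
            (m for m in minority if m is not x),
            key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)),
        )[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nn)])
    return synthetic
```

Because each synthetic point is a convex combination of two real minority samples, it always lies on the segment between them, which is also why SMOTE cannot invent genuinely new modes the way a generative model can.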
Exploiting GAN as an Oversampling Method for Imbalanced Data Augmentation with Application to the Fault Diagnosis of an Industrial Robot
Machine learning based intelligent fault diagnosis often requires a balanced data set for
yielding an acceptable performance. However, obtaining faulty data from industrial
equipment is challenging, often resulting in an imbalance between data acquired in
normal conditions and data acquired in the presence of faults. Data augmentation
techniques are among the most promising approaches to mitigate this issue.
Generative adversarial networks (GAN) are a type of generative model consisting
of a generator module and a discriminator. Through adversarial learning between
these modules, the optimised generator can produce synthetic patterns that can be
used for data augmentation.
We investigate whether GAN can be used as an oversampling tool to compensate
for an imbalanced data set in an industrial robot fault diagnosis task. A series of experiments
are performed to validate the feasibility of this approach. The approach is
compared with six scenarios, including the classical oversampling method (SMOTE).
Results show that GAN outperforms all the compared scenarios.
To mitigate two recognised issues in GAN training, namely training instability
and mode collapse, the following is proposed.
We propose VGAN (the V-matrix based GAN), a generalization of both the mean
square error GAN (MSE GAN) and the Wasserstein GAN with gradient penalty
(WGAN-GP), to mitigate training instability. We also propose a novel criterion
to track the most suitable model during training. Experiments on both MNIST
and the industrial robot data set show that the proposed VGAN outperforms other
competitive models.
The cycle-consistency generative adversarial network (CycleGAN) aims to address
mode collapse, a condition in which the generator yields little to no variability.
We investigate the sliced Wasserstein distance (SWD) for CycleGAN. SWD is evaluated
in both the unconditional CycleGAN and the conditional CycleGAN, with and
without squeeze-and-excitation mechanisms. Again, two data sets are evaluated:
MNIST and the industrial robot data set. Results show that SWD has lower
computational cost and outperforms the conventional CycleGAN.
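The sliced Wasserstein distance mentioned above can be estimated with the standard random-projection construction: project both point sets onto random unit directions, sort the 1-D projections, and average the differences. The sketch below is a hedged illustration of that general formulation, not the authors' implementation:

```python
import random

def sliced_wasserstein(xs, ys, n_proj=50, seed=0):
    """Monte-Carlo estimate of the sliced Wasserstein-1 distance between two
    equally sized point sets: project onto random unit directions, sort the
    1-D projections, and average the absolute pairwise differences."""
    rng = random.Random(seed)
    dim, total = len(xs[0]), 0.0
    for _ in range(n_proj):
        # random direction on the unit sphere
        theta = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = sum(t * t for t in theta) ** 0.5
        theta = [t / norm for t in theta]
        px = sorted(sum(t * a for t, a in zip(theta, x)) for x in xs)
        py = sorted(sum(t * b for t, b in zip(theta, y)) for y in ys)
        # 1-D Wasserstein-1 distance between sorted samples
        total += sum(abs(a - b) for a, b in zip(px, py)) / len(px)
    return total / n_proj
```

The low computational cost follows directly from the structure: each projection reduces the problem to a 1-D optimal transport, which sorting solves exactly.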
Synthetic Sensor Data for Human Activity Recognition
Human activity recognition (HAR) based on wearable sensors has emerged as an active topic of research in machine learning and human behavior analysis because of its applications in several fields, including health, security and surveillance, and remote monitoring. Machine learning algorithms are frequently applied in HAR systems to learn from labeled sensor data. The effectiveness of these algorithms generally relies on access to large amounts of accurately labeled training data. However, labeled data for HAR is hard to come by and is often heavily imbalanced in favor of one or a few dominant classes, which in turn leads to poor recognition performance.
In this study we introduce a generative adversarial network (GAN)-based approach for HAR that automatically synthesizes balanced and realistic sensor data. GANs are robust generative networks, typically used to create synthetic images that cannot be distinguished from real images. Here we explore and construct a model for generating several types of human activity sensor data using a Wasserstein GAN (WGAN). We assess the synthetic data using two commonly used classifier models, a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. We evaluate the quality and diversity of the synthetic data by training on synthetic data and testing on real sensor data, and vice versa. We then use synthetic sensor data to oversample the imbalanced training set. We demonstrate the efficacy of the proposed method on two publicly available human activity datasets, the Sussex-Huawei Locomotion (SHL) dataset and the Smoking Activity Dataset (SAD). With a CNN activity classifier, WGAN-augmented training data improves on the imbalanced case for both SHL (F1-score from 0.85 to 0.95) and SAD (F1-score from 0.70 to 0.77).
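The "train on synthetic, test on real" evaluation described above can be sketched with a deliberately simple nearest-centroid classifier standing in for the CNN/LSTM models; all function names here are illustrative assumptions, not the study's code:

```python
def centroid_fit(samples, labels):
    """Nearest-centroid 'classifier': the mean feature vector per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def centroid_predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2
                                 for a, b in zip(centroids[y], x)))

def tstr_accuracy(synthetic, syn_labels, real, real_labels):
    """Train on Synthetic, Test on Real: fit on generated data, score on
    held-out real data. High accuracy suggests the synthetic distribution
    captures class-discriminative structure."""
    centroids = centroid_fit(synthetic, syn_labels)
    hits = sum(centroid_predict(centroids, x) == y
               for x, y in zip(real, real_labels))
    return hits / len(real)
```

Swapping the argument roles gives the complementary "train on real, test on synthetic" score, and together the two directions probe both the quality and the diversity of the generated data.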
NLP Methods in Host-based Intrusion Detection Systems: A Systematic Review and Future Directions
A Host-based Intrusion Detection System (HIDS) is an effective last line of
defense against cyber security attacks after perimeter defenses (e.g.,
Network-based Intrusion Detection Systems and firewalls) have failed or been
bypassed. HIDS is widely adopted in industry, ranked among the top two most
used security tools by the Security Operation Centers (SOC) of organizations.
Although an effective and efficient HIDS is highly desirable for industrial
organizations, the evolution of increasingly complex attack patterns causes
several challenges, resulting in performance degradation of HIDS (e.g., a high
false alert rate creating alert fatigue for SOC staff). Since Natural Language
Processing (NLP) methods are better suited to identifying complex attack
patterns, an increasing number of HIDS leverage advances in NLP that have
shown effective and efficient performance in precisely detecting low-footprint,
zero-day attacks and predicting the next steps of attackers. This active
research trend of using NLP in HIDS demands a synthesized and comprehensive
body of knowledge of NLP-based HIDS. Thus, we conducted a systematic review of
the literature on the end-to-end pipeline of the use of NLP in HIDS
development. For this pipeline, we identify, taxonomically categorize, and
systematically compare the state of the art of NLP method usage in HIDS, the
attacks detected by these NLP methods, and the datasets and evaluation metrics
used to evaluate NLP-based HIDS. We highlight prevalent practices,
considerations, advantages, and limitations to support HIDS developers. We
also outline future research directions for NLP-based HIDS development.
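To illustrate the kind of NLP featurisation such a review covers, here is a minimal hypothetical sketch that treats a system-call trace as a "sentence" of tokens and scores anomalies by the fraction of n-grams never seen in normal behaviour. This is an illustrative assumption of the simplest possible approach, not a method from any particular surveyed paper:

```python
from collections import Counter

def ngram_profile(trace, n=3):
    """Bag of n-grams over a system-call trace, treating the trace as a
    sentence of syscall tokens (the simplest NLP-style featurisation)."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

def anomaly_score(normal_profile, trace, n=3):
    """Fraction of the trace's n-grams never observed in normal behaviour;
    higher scores flag a potentially anomalous (malicious) process."""
    grams = ngram_profile(trace, n)
    unseen = sum(c for g, c in grams.items() if g not in normal_profile)
    return unseen / max(1, sum(grams.values()))
```

Modern NLP-based HIDS replace this hand-built n-gram profile with learned representations (e.g., embeddings or language models over syscall tokens), which is precisely what lets them catch low-footprint attacks that share individual syscalls with benign traces.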
Deep Generative Models: The winning key for large and easily accessible ECG datasets?
Large high-quality datasets are essential for building powerful artificial intelligence (AI) algorithms capable of supporting advancement in cardiac clinical research. However, researchers working with electrocardiogram (ECG) signals struggle to get access to such datasets and/or to build one. The aim of the present work is to shed light on a potential solution to the lack of large and easily accessible ECG datasets. Firstly, the main causes of this lack are identified and examined. Afterward, the potential and limitations of cardiac data generation via deep generative models (DGMs) are analyzed in depth. These very promising algorithms have been found capable not only of generating large quantities of ECG signals but also of supporting data anonymization processes, simplifying data sharing while respecting patients' privacy. Their application could help research progress and cooperation in the name of open science. However, several aspects, such as standardized synthetic data quality evaluation and algorithm stability, need to be explored further.
Analysis and automatic identification of spontaneous emotions in speech from human-human and human-machine communication
This research focuses on improving our understanding of human-human and human-machine interactions by analysing participants' emotional status. For this purpose, we have developed and enhanced Speech Emotion Recognition (SER) systems for both kinds of interaction in real-life scenarios, with explicit emphasis on the Spanish language. In this framework, we have conducted an in-depth analysis of how humans express emotions in speech when communicating with other persons or machines in actual situations. Thus, we have analysed and studied the way emotional information is expressed in a variety of true-to-life environments, which is a crucial aspect for the development of SER systems. This study aimed to comprehensively understand the challenge we wanted to address: identifying emotional information in speech using machine learning technologies. Neural networks have been demonstrated to be adequate tools for identifying events in speech and language. Most of our experiments aimed to make local comparisons between specific aspects; thus, the experimental conditions were tailored to each particular analysis. The experiments across the different articles (P1 to P19) are hardly comparable due to our continuous learning in dealing with the difficult task of identifying emotions in speech. In order to make a fair comparison, additional unpublished results are presented in the Appendix. These experiments were carried out under identical and rigorous conditions. This general comparison offers an overview of the advantages and disadvantages of the different methodologies for the automatic recognition of emotions in speech.
Learning implicit recommenders from massive unobserved feedback
In this thesis we investigate implicit feedback techniques for real-world recommender systems. Learning a recommender system from implicit feedback is very challenging, however, primarily due to the lack of negative feedback. While a common strategy is to treat unobserved feedback (i.e., missing data) as a source of negative signal, two technical difficulties cannot be overlooked: (1) the ratio of positive to negative feedback in practice is highly imbalanced, and (2) learning through all unobserved feedback (which easily scales to billions of entries or more) is computationally expensive.
To effectively and efficiently learn recommender models from implicit feedback, two types of methods are presented: negative sampling based stochastic gradient descent (NS-SGD) and whole sample based batch gradient descent (WS-BGD). For the NS-SGD method, we investigate how to effectively sample informative negative examples to improve recommendation algorithms. More specifically, three learning models are described: Lambda Factorization Machines (lambdaFM), Boosting Factorization Machines (BoostFM) and Geographical Bayesian Personalized Ranking (GeoBPR). For the WS-BGD method, we study how to efficiently use all unobserved implicit feedback data rather than resorting to negative sampling. A fast BGD learning algorithm is proposed, applicable to both basic collaborative filtering and content/context-aware recommendation settings.
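The negative-sampling step underlying this family of methods can be sketched as follows: for each training draw, pair a user's observed (positive) item with an unobserved item treated as an implicit negative. This is a simplified, hypothetical illustration of the general NS-SGD sampling scheme, not the thesis's lambdaFM/BoostFM/GeoBPR code:

```python
import random

def sample_bpr_triples(interactions, n_items, n_samples, seed=0):
    """Draw (user, positive item, negative item) triples for pairwise
    ranking losses: the positive comes from the user's observed feedback,
    the negative is sampled from the unobserved items."""
    rng = random.Random(seed)
    users = list(interactions)
    triples = []
    while len(triples) < n_samples:
        u = rng.choice(users)
        pos = rng.choice(list(interactions[u]))
        neg = rng.randrange(n_items)
        if neg in interactions[u]:
            continue  # only unobserved items may serve as negatives
        triples.append((u, pos, neg))
    return triples
```

Uniform negative sampling like this is the baseline; the informative-sampling methods described above bias the draw toward negatives the current model ranks too highly, which is where most of the gradient signal lives.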
The last piece of research concerns session-based item recommendation, which is also an implicit feedback scenario. Unlike the four works above, which are based on shallow embedding models, here we apply a deep learning based sequence-to-sequence model to directly generate the probability distribution of the next item. The proposed generative model can be applied to various sequential recommendation scenarios.
To support the main arguments, extensive experiments are carried out on real-world recommendation datasets. The proposed recommendation algorithms achieve significant improvements over strong benchmark models. Moreover, these models can also serve as generic solutions and solid baselines for future implicit recommendation problems.