4 research outputs found

    Generative adversarial deep learning in images using Nash equilibrium game theory

    Get PDF
    A generative adversarial learning (GAL) algorithm is presented to overcome manipulations of adversarial data and to produce a secured convolutional neural network (CNN). The generative algorithm's main objective is to make small changes to test data carrying positive and negative class labels so that the CNN misclassifies it. An adversarial algorithm is used to manipulate input data lying along the boundaries of the learner's decision-making process. The algorithm generates adversarial modifications to the test dataset using a multiplayer stochastic game approach, without learning how to manipulate the data during training. The manipulated data is then passed through a CNN for evaluation. The multiplayer game consists of an interaction between the adversaries, which generate the manipulations, and the learner, which retrains the model. Nash equilibrium game theory (NEGT) is applied to the Canadian Institute for Advanced Research (CIFAR) dataset to produce a secure CNN output that is more robust to adversarial data manipulations. The experimental results show that the proposed NEGT-GAL achieves a greater mean value of 7.92 and takes less wall-clock time, 25,243 sec. The proposed NEGT-GAL therefore outperforms the compared existing methods and achieves greater performance.
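    The abstract gives no implementation details, so the following is a minimal sketch of the general pattern it describes, assuming PyTorch: an adversary perturbs test inputs toward the learner's decision boundary, and the manipulated data is then evaluated by a CNN. The SmallCNN, the epsilon budget, the single FGSM-style gradient step (standing in for the game-generated manipulations), and the random tensors (standing in for CIFAR images) are all illustrative assumptions, not the paper's method.

        import torch
        import torch.nn as nn

        class SmallCNN(nn.Module):
            """Toy stand-in for the learner CNN."""
            def __init__(self, num_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.head = nn.Linear(32, num_classes)

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        def adversarial_manipulation(model, x, y, epsilon=0.03):
            # One gradient step toward the decision boundary; an FGSM-style
            # stand-in for the paper's game-generated test-set manipulations.
            x_adv = x.clone().requires_grad_(True)
            nn.functional.cross_entropy(model(x_adv), y).backward()
            return (x_adv + epsilon * x_adv.grad.sign()).detach().clamp(0, 1)

        model = SmallCNN()
        x = torch.rand(8, 3, 32, 32)    # random stand-in for CIFAR test images
        y = torch.randint(0, 10, (8,))  # stand-in labels
        x_adv = adversarial_manipulation(model, x, y)
        with torch.no_grad():           # evaluate the CNN on manipulated data
            clean = (model(x).argmax(1) == y).float().mean().item()
            adv = (model(x_adv).argmax(1) == y).float().mean().item()
        print(f"clean acc {clean:.2f}, adversarial acc {adv:.2f}")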

    Credit Risk Modeling without Sensitive Features: An Adversarial Deep Learning Model for Fairness and Profit

    Get PDF
    We propose an adversarial deep learning model for credit risk modeling. We make positive use of a sophisticated machine learning model's ability to triangulate (i.e., to infer sensitive group affiliation using only permissible features), an ability often deemed "troublesome" in fair machine learning research, to increase both borrower welfare and lender profits while improving fairness. We train and test our model on a dataset from a real-world microloan company. Our model significantly outperforms regular deep neural networks without adversaries, as well as XGBoost, the most popular credit risk model, in terms of improving both borrowers' welfare and lenders' profits. Our empirical findings also suggest that the traditional AUC metric cannot reflect a model's performance on borrowers' welfare and lenders' profits. Our framework is ready to be customized for other microloan firms and can easily be adapted to many other decision-making scenarios.
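    The abstract does not specify the architecture, so the sketch below shows only the generic adversarial-debiasing pattern such models typically follow, assuming PyTorch: a risk predictor is paired with an adversarial head that tries to recover the sensitive group from the shared representation, connected through a gradient-reversal layer so the encoder learns to withhold that signal. FairCreditNet, the layer sizes, the lambda weight, and the random stand-in data are all hypothetical.

        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            """Identity in the forward pass; negated gradient in the backward
            pass, so the encoder is trained to hide the sensitive signal."""
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad):
                return -ctx.lam * grad, None

        class FairCreditNet(nn.Module):
            def __init__(self, n_features, lam=1.0):
                super().__init__()
                self.lam = lam
                self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
                self.risk_head = nn.Linear(32, 1)  # predicts repayment risk
                self.adv_head = nn.Linear(32, 1)   # tries to recover the group

            def forward(self, x):
                h = self.encoder(x)
                return self.risk_head(h), self.adv_head(GradReverse.apply(h, self.lam))

        # One toy training step on random stand-in data.
        net = FairCreditNet(n_features=10)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        x = torch.randn(64, 10)                    # permissible features only
        y = torch.randint(0, 2, (64, 1)).float()   # repayment label
        s = torch.randint(0, 2, (64, 1)).float()   # sensitive group (training only)
        risk, group = net(x)
        loss = (nn.functional.binary_cross_entropy_with_logits(risk, y)
                + nn.functional.binary_cross_entropy_with_logits(group, s))
        opt.zero_grad(); loss.backward(); opt.step()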

    On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training

    Full text link
    Aspect-based sentiment analysis (ABSA) aims to automatically infer the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews, and has become a fundamental real-world application. Since the early 2010s, ABSA has achieved extraordinarily high accuracy with various deep neural models. However, existing ABSA models with strong in-house performance may fail to generalize to challenging cases where the contexts are variable, i.e., they show low robustness to real-world environments. In this study, we propose to enhance ABSA robustness by systematically rethinking the bottlenecks from all possible angles: model, data, and training. First, we strengthen the current most robust syntax-aware models with a universal-syntax graph convolutional network that simultaneously incorporates rich external syntactic dependencies and aspect labels. From the corpus perspective, we propose to automatically induce high-quality synthetic training data of various types, allowing models to learn sufficient inductive bias for better robustness. Last, based on the rich pseudo data, we perform adversarial training to strengthen resistance to context perturbation, and we employ contrastive learning to reinforce the representations of instances with contrasting sentiments. Extensive robustness evaluations are conducted. The results demonstrate that our enhanced syntax-aware model achieves better robustness than all the state-of-the-art baselines. Additionally incorporating our synthetic corpus improves the robust testing results by around 10% accuracy, which is further improved by the advanced training strategies. In-depth analyses are presented to reveal the factors influencing ABSA robustness. (Comment: Accepted in ACM Transactions on Information Systems)
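    The abstract names two training strategies, adversarial training over pseudo data and contrastive learning over sentiment representations. The sketch below illustrates both in a generic form, assuming PyTorch: an FGSM-style perturbation of word embeddings stands in for the paper's context-perturbation training, and a standard supervised contrastive loss treats same-polarity instances as positives. The tiny embedding/classifier pair and the random token ids are stand-ins, not the paper's universal-syntax GCN.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        embed = nn.Embedding(1000, 64)          # stand-in word embeddings
        classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))

        def classify(e):
            return classifier(e.mean(dim=1))    # mean-pool tokens, 3 polarities

        def adversarial_loss(ids, labels, eps=1e-2):
            # FGSM-style perturbation in embedding space as a generic stand-in
            # for training resistance to context perturbations.
            e = embed(ids).detach().requires_grad_(True)
            F.cross_entropy(classify(e), labels).backward()
            e_adv = (e + eps * e.grad.sign()).detach()
            return F.cross_entropy(classify(e_adv), labels)

        def supervised_contrastive_loss(feats, labels, tau=0.1):
            # Instances sharing a polarity label are treated as positives.
            z = F.normalize(feats, dim=1)
            sim = z @ z.t() / tau
            eye = torch.eye(len(z), dtype=torch.bool)
            pos = ((labels[:, None] == labels[None, :]) & ~eye).float()
            log_prob = sim - torch.logsumexp(
                sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
            return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

        ids = torch.randint(0, 1000, (8, 12))   # stand-in token ids
        labels = torch.randint(0, 3, (8,))      # sentiment polarity labels
        loss = (adversarial_loss(ids, labels)
                + supervised_contrastive_loss(embed(ids).mean(dim=1), labels))
        loss.backward()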

    Adversarial deep learning models with multiple adversaries

    Full text link
    We develop an adversarial learning algorithm for supervised classification in general and Convolutional Neural Networks (CNN) in particular. The algorithm's objective is to produce small changes to the data distribution, defined over positive and negative class labels, so that the resulting data distribution is misclassified by the CNN. The theoretical goal is to determine a manipulating change on the input data that finds learner decision boundaries where many positive labels become negative labels. We then propose a CNN which is secure against such unforeseen changes in data. The algorithm generates adversarial manipulations by formulating a multiplayer stochastic game targeting the classification performance of the CNN. The multiplayer stochastic game is expressed in terms of multiple two-player sequential games. Each game consists of interactions between two players, an intelligent adversary and the learner CNN, such that a player's payoff function increases with the interactions. Following the convergence of a sequential noncooperative Stackelberg game, each two-player game is solved for its Nash equilibrium. The Nash equilibrium yields a pair of strategies (learner weights and evolutionary operations) from which neither the learner nor the adversary has an incentive to deviate. We then retrain the learner over all the adversarial manipulations generated by the multiple players to propose a secure CNN that is robust to subsequent adversarial data manipulations. The adversarial data and the corresponding CNN performance are evaluated on the MNIST handwritten digits dataset. The results suggest that game theory and evolutionary algorithms are very effective in securing deep learning models against performance vulnerabilities simulated as attack scenarios from multiple adversaries.
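    As a hedged sketch of the alternating best-response loop described here (assuming PyTorch; the paper's actual CNN, payoff functions, and evolutionary operators are not given in the abstract), the adversary below mutates and selects perturbations that maximize the learner's loss, and the learner then retrains on the manipulated data. The rounds approximate convergence toward an equilibrium where neither player improves by deviating; the linear learner, population size, mutation scale, and random stand-in data are illustrative assumptions.

        import torch
        import torch.nn as nn

        learner = nn.Linear(784, 10)             # stand-in for the MNIST CNN
        opt = torch.optim.SGD(learner.parameters(), lr=0.1)
        x = torch.rand(128, 784)                 # stand-in for MNIST digits
        y = torch.randint(0, 10, (128,))

        def adversary_best_response(pop_size=8, sigma=0.05, generations=5):
            # Evolutionary operations: mutate a population of perturbations
            # and keep whichever most increases the learner's loss.
            best = torch.zeros_like(x)
            with torch.no_grad():
                best_loss = nn.functional.cross_entropy(learner(x), y)
                for _ in range(generations):
                    for _ in range(pop_size):
                        cand = (best + sigma * torch.randn_like(x)).clamp(-0.1, 0.1)
                        loss = nn.functional.cross_entropy(learner(x + cand), y)
                        if loss > best_loss:
                            best, best_loss = cand, loss
            return best

        # Alternate adversary moves and learner retraining; at an equilibrium
        # neither player's best response changes the outcome.
        for r in range(3):
            delta = adversary_best_response()
            for _ in range(20):                  # learner's best response
                opt.zero_grad()
                loss = nn.functional.cross_entropy(learner(x + delta), y)
                loss.backward()
                opt.step()
            print(f"round {r}: retrained loss {loss.item():.3f}")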