55 research outputs found

    AAANE: Attention-based Adversarial Autoencoder for Multi-scale Network Embedding

    Network embedding represents nodes in a continuous vector space and preserves structural information from the network. Existing methods usually adopt a "one-size-fits-all" approach to multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes collaboration among different scales and lets them vote for robust representations. The proposed AAANE consists of two components: 1) an attention-based autoencoder that effectively captures the highly non-linear network structure and can de-emphasize irrelevant scales during training, and 2) an adversarial regularization that guides the autoencoder to learn robust representations by matching the posterior distribution of the latent embeddings to a given prior distribution. This is the first attempt to introduce attention mechanisms to multi-scale network embedding. Experimental results on real-world networks show that the learned attention parameters differ for every network and that the proposed approach outperforms existing state-of-the-art approaches for network embedding. Comment: 8 pages, 5 figures.
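    The abstract sketches a concrete recipe: attention weights fuse several scale-specific views of each node, an autoencoder compresses the fused view, and a discriminator pushes the latent codes toward a chosen prior. The PyTorch sketch below is a minimal, hypothetical rendering of that recipe; the layer sizes, the Gaussian prior, and all module names are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAANESketch(nn.Module):
    """Toy attention-based adversarial autoencoder over K proximity scales."""
    def __init__(self, num_scales=4, scale_dim=32, latent_dim=16):
        super().__init__()
        # One learnable logit per scale (e.g. 1st-, 2nd-, ..., k-th order proximity).
        self.scale_logits = nn.Parameter(torch.zeros(num_scales))
        self.enc = nn.Sequential(nn.Linear(scale_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, scale_dim))
        self.disc = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))   # prior-vs-posterior critic

    def forward(self, x):                     # x: (batch, num_scales, scale_dim)
        attn = torch.softmax(self.scale_logits, dim=0)   # de-emphasizes weak scales
        fused = torch.einsum('k,bkd->bd', attn, x)       # attention-weighted fusion
        z = self.enc(fused)
        return fused, z, self.dec(z)

model = AAANESketch()
x = torch.randn(8, 4, 32)                     # toy batch of multi-scale node features
fused, z, recon = model(x)

# Reconstruction term plus adversarial regularizer matching z to a Gaussian prior.
bce = nn.BCEWithLogitsLoss()
prior = torch.randn_like(z)
d_loss = bce(model.disc(prior), torch.ones(8, 1)) + \
         bce(model.disc(z.detach()), torch.zeros(8, 1))  # train the critic
g_loss = F.mse_loss(recon, fused) + \
         bce(model.disc(z), torch.ones(8, 1))            # fool the critic
```

    In a real training loop one would alternate the critic update (d_loss) with the encoder/decoder update (g_loss), as in standard adversarial autoencoder training.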

    Privacy Protection and Utility Trade-Off for Social Graph Embedding

    In graph embedding protection, deleting the embedding vector of a node does not completely disrupt its structural relationships. The embedding model must be retrained over the network without sensitive nodes, which wastes computation and offers no protection for ordinary users. Meanwhile, edge perturbations do not guarantee good utility. This work proposes a new privacy protection and utility trade-off method that requires no retraining. Firstly, since embedding distance reflects the closeness of nodes, we label and group user nodes into sensitive, near-sensitive, and ordinary regions and apply a different strength of privacy protection to each. The near-sensitive region reduces the leakage risk of neighboring nodes connected to sensitive nodes without sacrificing all of their utility. Secondly, we use mutual information to measure privacy and utility, adapting a single model-based mutual information neural estimator to vector pairs to reduce modeling and computational complexity. Thirdly, by repeatedly adding different noise to the divided regions and re-estimating the mutual information between the original and noise-perturbed embeddings, our framework achieves a good trade-off between privacy and utility. Simulation results show that the proposed framework is superior to state-of-the-art baselines such as LPPGE and DPNE.
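    A compact way to see the moving parts is: (i) region-dependent noise on the embeddings and (ii) a single estimator network scoring mutual information between original and perturbed embedding pairs. The PyTorch sketch below uses a Donsker-Varadhan (MINE-style) lower bound; the region labels, noise strengths, and architecture are illustrative assumptions, not the paper's configuration.

```python
import math
import torch
import torch.nn as nn

# Assumed per-region noise strengths: sensitive nodes get the strongest perturbation.
SIGMA = {'sensitive': 1.0, 'near_sensitive': 0.5, 'ordinary': 0.1}

def perturb(emb, regions):
    """Add region-dependent Gaussian noise to node embeddings."""
    scale = torch.tensor([SIGMA[r] for r in regions]).unsqueeze(1)
    return emb + scale * torch.randn_like(emb)

class MINE(nn.Module):
    """Single statistics network T(x, y) bounding I(X; Y) over vector pairs."""
    def __init__(self, dim):
        super().__init__()
        self.T = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x, y):
        joint = self.T(torch.cat([x, y], dim=1)).mean()
        y_shuf = y[torch.randperm(y.size(0))]   # break pairing -> product of marginals
        marg = torch.logsumexp(self.T(torch.cat([x, y_shuf], dim=1)), dim=0) \
               - math.log(y.size(0))
        return joint - marg                      # Donsker-Varadhan lower bound

emb = torch.randn(100, 16)                       # toy node embeddings
regions = ['sensitive'] * 10 + ['near_sensitive'] * 20 + ['ordinary'] * 70
mine = MINE(16)
mi_bound = mine(emb, perturb(emb, regions))  # maximize w.r.t. mine's weights to estimate MI
```

    Raising SIGMA in a region lowers the estimated mutual information there (more privacy, less utility), which is the knob the trade-off search turns.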

    A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models

    With the great success of graph embedding models in both academia and industry, the robustness of graph embedding against adversarial attack has inevitably become a central problem in the graph learning domain. Despite the fruitful progress, most current works perform the attack in a white-box fashion: they need access to the model predictions and labels to construct their adversarial loss. However, the inaccessibility of model predictions in real systems makes the white-box attack impractical for real graph learning systems. This paper extends current frameworks in a more general and flexible direction: we aim to attack various kinds of graph embedding models in a black-box fashion. To this end, we first investigate the theoretical connections between graph signal processing and graph embedding models in a principled way, and formulate the graph embedding model as a general graph signal process with a corresponding graph filter. On this basis, a generalized adversarial attacker, GF-Attack, is constructed from the graph filter and the feature matrix. Instead of accessing any knowledge of the target classifiers used on top of the graph embedding, GF-Attack performs the attack only on the graph filter, in a black-box fashion. To validate the generality of GF-Attack, we construct the attacker on four popular graph embedding models. Extensive experimental results validate the effectiveness of our attacker on several benchmark datasets. In particular, even small graph perturbations such as a one-edge flip consistently yield a strong attack against different graph embedding models. Comment: Accepted by AAAI 2020.
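    The core reduction is: the embedding model behaves like a polynomial graph filter applied to the feature matrix, so an attacker only needs to perturb the edges that most change the filtered signal, never touching the downstream classifier. The numpy sketch below illustrates that black-box loop with a simple Frobenius-norm surrogate; GF-Attack's actual objective is spectral (built from the filter's eigenvalues), so treat the scoring function here as an assumed stand-in.

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalized adjacency of (A + I), a common one-hop graph filter."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def filter_signal(A, X, order=2):
    """Treat the embedding model as an order-k polynomial filter applied to features X."""
    return np.linalg.matrix_power(normalized_adj(A), order) @ X

def flip_score(A, X, i, j):
    """Black-box surrogate loss: how far does flipping edge (i, j) move the filtered signal?"""
    A2 = A.copy()
    A2[i, j] = A2[j, i] = 1.0 - A2[i, j]      # flip the edge
    return np.linalg.norm(filter_signal(A2, X) - filter_signal(A, X))

# Greedy one-edge attack on a toy graph: pick the single flip with the largest effect.
rng = np.random.default_rng(0)
A = np.triu((rng.random((20, 20)) < 0.2).astype(float), k=1)
A = A + A.T                                   # symmetric, no self-loops
X = rng.standard_normal((20, 8))
candidates = [(i, j) for i in range(20) for j in range(i + 1, 20)]
best_flip = max(candidates, key=lambda e: flip_score(A, X, *e))
```

    Because the score depends only on the filter and feature matrix, the same loop applies unchanged to any embedding model that fits the graph-filter formulation.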

    Greedy PIG: Adaptive Integrated Gradients

    Deep learning has become the standard approach for most machine learning tasks. While its impact is undeniable, interpreting the predictions of deep learning models from a human perspective remains a challenge. In contrast to model training, model interpretability is harder to quantify and to pose as an explicit optimization problem. Inspired by the AUC softmax information curve (AUC SIC) metric for evaluating feature attribution methods, we propose a unified discrete optimization framework for feature attribution and feature selection based on subset selection. This leads to a natural adaptive generalization of the path integrated gradients (PIG) method for feature attribution, which we call Greedy PIG. We demonstrate the success of Greedy PIG on a wide variety of tasks, including image feature attribution, graph compression/explanation, and post-hoc feature selection on tabular data. Our results show that introducing adaptivity is a powerful and versatile way to strengthen attribution methods.
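    The adaptive idea is easy to state: run path integrated gradients, commit to the top-scoring features by moving the baseline onto the input at those coordinates, and re-attribute. The PyTorch sketch below is a paraphrase of that greedy loop under assumed names and a toy model; the paper's AUC SIC objective and subset-selection formulation are not reproduced here.

```python
import torch

def integrated_gradients(f, x, baseline, steps=32):
    """Standard path integrated gradients from baseline to x."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)      # (steps, d) points along the path
    path.requires_grad_(True)
    f(path).sum().backward()
    return (x - baseline) * path.grad.mean(0)      # average gradient along the path

def greedy_pig(f, x, baseline, rounds=3, per_round=2):
    """Adaptive variant: repeatedly attribute, then 'reveal' the top features
    by moving the baseline to x on those coordinates before re-attributing."""
    selected, base = [], baseline.clone()
    for _ in range(rounds):
        attr = integrated_gradients(f, x, base).abs()
        if selected:
            attr[selected] = -float('inf')         # ignore already-chosen features
        top = torch.topk(attr, per_round).indices.tolist()
        selected += top
        base[top] = x[top]                         # adapt: fix the chosen features
    return selected

# Toy usage with an assumed quadratic model; names and sizes are illustrative.
f = lambda z: (z ** 2).sum(dim=-1)
x, baseline = torch.arange(1.0, 9.0), torch.zeros(8)
print(greedy_pig(f, x, baseline))
```

    Because the baseline is updated each round, later attributions are conditioned on the features already selected, which is the adaptivity the abstract refers to.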