
    Deep Learning in Social Networks for Overlapping Community Detection

    In any network system, a community is a collection of nodes that are tightly associated with one another. Identifying community structure is a crucial task in network analysis, particularly for exposing connections between certain nodes. Numerous methodologies for overlapping community detection have been described in the literature. Many scholars have recently focused on network embedding and feature learning techniques for node clustering; these techniques translate the network into a representation space with fewer dimensions. In this paper, a deep neural network-based model for learning graph representations is presented: stacked autoencoders learn a nonlinear embedding of the original graph, and the AEOCDSN algorithm is then used to extract overlapping communities. The efficiency of the suggested model is examined through experiments on real-world datasets of various sizes against accepted benchmarks. According to the empirical findings, the method outperforms several well-known community detection techniques.
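    The abstract does not spell out the AEOCDSN details, but the general pipeline it describes (autoencoder embedding, then community extraction) can be sketched. The snippet below is a minimal, assumption-laden illustration in PyTorch: adjacency-matrix rows are fed through a stacked autoencoder, and the bottleneck activations serve as low-dimensional node embeddings for a downstream clustering step. Layer sizes, the toy graph, and all training choices are invented for illustration.

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Encode adjacency rows into a low-dimensional embedding (illustrative sizes)."""
    def __init__(self, n_nodes, dims=(256, 64, 16)):
        super().__init__()
        enc, dec, prev = [], [], n_nodes
        for d in dims:                                  # encoder: n_nodes -> 256 -> 64 -> 16
            enc += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        for d in reversed((n_nodes,) + dims[:-1]):      # mirrored decoder back to n_nodes
            dec += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec[:-1])         # raw logits at the output

    def forward(self, x):
        z = self.encoder(x)                             # nonlinear node embedding
        return self.decoder(z), z

A = torch.rand(100, 100).bernoulli()                    # toy adjacency matrix (assumption)
model = StackedAutoencoder(n_nodes=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                                     # train to reconstruct adjacency rows
    recon, z = model(A)
    loss = nn.functional.binary_cross_entropy_with_logits(recon, A)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

    A downstream step would cluster the embeddings z and read overlapping communities off soft memberships (e.g., nodes whose membership exceeds a threshold in more than one cluster); that step, like everything above, is a sketch rather than the AEOCDSN procedure itself.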

    Learning Binary Decision Trees by Argmin Differentiation

    We address the problem of learning binary decision trees that partition data for some downstream task. We propose to learn discrete parameters (i.e., for tree traversals and node pruning) and continuous parameters (i.e., for tree split functions and prediction functions) simultaneously using argmin differentiation. We do so by sparsely relaxing a mixed-integer program for the discrete parameters, to allow gradients to pass through the program to the continuous parameters. We derive customized algorithms to efficiently compute the forward and backward passes. This means that our tree-learning procedure can be used as an (implicit) layer in arbitrary deep networks, and can be optimized with arbitrary loss functions. We demonstrate that our approach produces binary trees that are competitive with existing single-tree and ensemble approaches, in both supervised and unsupervised settings. Further, apart from greedy approaches (which do not have competitive accuracies), our method is faster to train than all other tree-learning baselines we compare with. The code for reproducing the results is available at https://github.com/vzantedeschi/LatentTrees.
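    The authors' implementation is available at the repository above. As a rough picture of why differentiable tree routing matters, the sketch below uses a plain soft relaxation (sigmoid splits), not the paper's sparse argmin relaxation of a mixed-integer program: once leaf assignments are differentiable in the split parameters, the tree can sit inside a larger network and train against any loss. All names, depths, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftBinaryTree(nn.Module):
    """Soft-routed binary tree of fixed depth (illustration, not the paper's method)."""
    def __init__(self, in_dim, depth=3, out_dim=1):
        super().__init__()
        self.depth = depth
        n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
        self.splits = nn.Linear(in_dim, n_internal)    # one hyperplane per internal node
        self.leaf_values = nn.Parameter(torch.zeros(n_leaves, out_dim))

    def forward(self, x):
        p = torch.sigmoid(self.splits(x))              # P(route right) at each internal node
        probs = torch.ones(x.size(0), 1, device=x.device)
        offset = 0
        for level in range(self.depth):                # breadth-first over levels
            n = 2 ** level
            pl = p[:, offset:offset + n]               # split probs of this level's nodes
            # child reach-probability = parent reach-probability * branch probability
            probs = torch.stack([probs * (1 - pl), probs * pl], dim=-1)
            probs = probs.reshape(x.size(0), 2 * n)
            offset += n
        return probs @ self.leaf_values                # mixture over leaf predictions

tree = SoftBinaryTree(in_dim=4, depth=3)
out = tree(torch.randn(8, 4))                          # (8, 1), differentiable end to end
```

    Gradients here flow because every routing decision is smoothed; the paper instead keeps the discrete structure and differentiates through a sparsely relaxed program, which is what makes its trees genuinely binary at inference time.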

    Robust Large-Margin Learning in Hyperbolic Space

    Recently, there has been a surge of interest in representation learning in hyperbolic spaces, driven by their ability to represent hierarchical data with significantly fewer dimensions than standard Euclidean spaces. However, the viability and benefits of hyperbolic spaces for downstream machine learning tasks have received less attention. In this paper, we present, to our knowledge, the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space. Specifically, we consider the problem of learning a large-margin classifier for data possessing a hierarchical structure. Our first contribution is a hyperbolic perceptron algorithm, which provably converges to a separating hyperplane. We then provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples. Finally, we prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees when learning the classifier directly in hyperbolic space.
    Comment: Accepted to NeurIPS 2020.
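    As a concrete picture of the setting, the sketch below works in the hyperboloid (Lorentz) model, assuming points x satisfy <x, x>_L = -1 with x[0] > 0 under the Minkowski inner product, and a decision rule sign(<w, x>_L). The mistake-driven update shown is a naive transplant of the Euclidean perceptron; the paper's provably convergent algorithm differs in its details, so treat this purely as an illustration of the geometry.

```python
import numpy as np

def minkowski(u, v):
    """Minkowski inner product: -u0*v0 + sum_{i>=1} ui*vi."""
    return -u[0] * v[0] + u[1:] @ v[1:]

def lift(v):
    """Lift a Euclidean point v onto the hyperboloid: x0 = sqrt(1 + |v|^2)."""
    return np.concatenate(([np.sqrt(1.0 + v @ v)], v))

def hyperbolic_perceptron(X, y, epochs=100):
    """X: (n, d+1) points on the hyperboloid; y: labels in {-1, +1}.
    Naive mistake-driven updates; the paper's convergent variant differs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x, label in zip(X, y):
            if label * minkowski(w, x) <= 0:   # misclassified (or on the boundary)
                w = w + label * x              # perceptron-style correction
                mistakes += 1
        if mistakes == 0:                      # no mistakes: w separates the data
            break
    return w

X = np.stack([lift(np.array([1.0])), lift(np.array([-1.0]))])
w = hyperbolic_perceptron(X, np.array([1, -1]))
```

    The point of the construction is that the classifier lives in the same low-dimensional hyperbolic space as the embedded hierarchy, which is where the paper's dimension-dependent guarantees come from.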