1,353 research outputs found

    TwitterMancer: Predicting Interactions on Twitter Accurately

    Full text link
    This paper investigates the interplay between different types of user interactions on Twitter, with respect to predicting missing or unseen interactions. For example, given a set of retweet interactions between Twitter users, how accurately can we predict reply interactions? Is it more difficult to predict retweet or quote interactions between a pair of accounts? Also, how important is time locality, and which features of interaction patterns are most important for accurate prediction of specific Twitter interactions? Our empirical study of Twitter interactions contributes initial answers to these questions. We have crawled an extensive dataset of Greek-speaking Twitter accounts and their follow, quote, retweet, and reply interactions over a period of a month. We find that we can accurately predict many interactions of Twitter users. Interestingly, the most predictive features vary with the user profiles and are not the same across all users. For example, for a pair of users that interact with a large number of other Twitter users, we find that certain "higher-dimensional" triads, i.e., triads that involve multiple types of interactions, are very informative, whereas for less active Twitter users, certain in-degrees and out-degrees play a major role. Finally, we provide various other insights on Twitter user behavior. Our code and data are available at https://github.com/twittermancer/.
    Keywords: Graph mining, machine learning, social media, social network
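    The abstract describes predicting one interaction type from the graph of another, using degree and triad features. As a minimal sketch of that idea (the toy retweet graph, the feature set, and the function name are my own assumptions, not the paper's), one can compute, for a directed pair (u, v), u's out-degree, v's in-degree, and the number of intermediaries forming a directed triad:

```python
from collections import defaultdict

def degree_and_triad_features(edges, pair):
    """Simple features for a directed pair (u, v):
    u's out-degree, v's in-degree, and the number of
    'triad' intermediaries w with u -> w and w -> v."""
    out_nbrs = defaultdict(set)
    in_nbrs = defaultdict(set)
    for a, b in edges:
        out_nbrs[a].add(b)
        in_nbrs[b].add(a)
    u, v = pair
    triads = len(out_nbrs[u] & in_nbrs[v])
    return (len(out_nbrs[u]), len(in_nbrs[v]), triads)

# Toy retweet graph over hypothetical accounts.
retweets = [("a", "b"), ("a", "c"), ("c", "b"), ("d", "b")]
feats = degree_and_triad_features(retweets, ("a", "b"))
```

    Feature vectors like these could then feed any off-the-shelf classifier to score candidate reply edges; the paper's actual feature engineering is richer.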

    Fidelity-Weighted Learning

    Full text link
    Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations.
    Comment: Published as a conference paper at ICLR 2018
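    The core mechanism described above — scaling each sample's parameter update by the teacher's confidence in its label — can be sketched in miniature with a confidence-weighted logistic-regression gradient step. This is an illustrative reduction, not the paper's architecture (the actual student and teacher are deep networks, and the teacher is a learned model), and the function name is mine:

```python
import math

def fwl_step(w, samples, confidences, lr=0.1):
    """One gradient step of logistic regression in which each
    sample's gradient contribution is scaled by the teacher's
    confidence in its (possibly weak) label."""
    grad = [0.0] * len(w)
    for (x, y), c in zip(samples, confidences):
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))       # student's prediction
        for i, xi in enumerate(x):
            grad[i] += c * (p - y) * xi      # confidence-weighted gradient
    return [wi - lr * g for wi, g in zip(w, grad)]

# Two weakly-labeled samples: the teacher trusts the first,
# distrusts the second, so only the first moves the weights.
w_new = fwl_step([0.0], [([1.0], 1), ([1.0], 0)], [1.0, 0.0])
```

    Samples the teacher assigns near-zero confidence are effectively ignored, while trusted samples drive the update at full strength.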

    VeriSparse: Training Verified Locally Robust Sparse Neural Networks from Scratch

    Full text link
    Several safety-critical applications such as self-navigation, health care, and industrial control systems use embedded systems as their core. Recent advancements in Neural Networks (NNs) in approximating complex functions make them well-suited for these domains. However, the compute-intensive nature of NNs limits their deployment and training in embedded systems with limited computation and storage capacities. Moreover, the adversarial vulnerability of NNs challenges their use in safety-critical scenarios. Hence, developing sparse models that have robustness guarantees while requiring fewer resources during training is critical to expanding NNs' use in safety-critical and resource-constrained embedded system settings. This paper presents 'VeriSparse' -- a framework to search for verified locally robust sparse networks starting from a random sparse initialization (i.e., from scratch). VeriSparse obtains sparse NNs exhibiting similar or higher verified local robustness, requiring one-third of the training time compared to the state-of-the-art approaches. Furthermore, VeriSparse performs both structured and unstructured sparsification, reducing storage, computing resources, and computation time during inference. Thus, it enables resource-constrained embedded platforms to leverage verified robust NN models, expanding their scope to safety-critical, real-time, and edge applications. We exhaustively investigate VeriSparse's efficacy and generalizability by evaluating various benchmark and application-specific datasets across several model architectures.
    Comment: 21 pages, 13 tables, 3 figures
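    Training from a random sparse initialization, as the abstract describes, starts by fixing a binary mask over the weights and keeping the masked entries at zero throughout training. The following toy sketch shows only that masking step (the verification machinery and the structured variant are omitted; function names and parameters are my own assumptions):

```python
import random

def random_sparse_mask(n, sparsity, seed=0):
    """Binary mask over an n-element weight vector: a fraction
    `sparsity` of entries is permanently zeroed at initialization,
    mimicking training a sparse network 'from scratch' rather
    than pruning a trained dense one."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    zeros = set(idx[:int(n * sparsity)])
    return [0.0 if i in zeros else 1.0 for i in range(n)]

def apply_mask(weights, mask):
    """Zero out the pruned coordinates; applied after every update."""
    return [w * m for w, m in zip(weights, mask)]

mask = random_sparse_mask(10, 0.7)          # keep 30% of the weights
sparse_w = apply_mask([0.5] * 10, mask)
```

    In a real training loop the mask would be re-applied after each gradient step so the pruned coordinates never revive.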

    Bi-level Masked Multi-scale CNN-RNN Networks for Short Text Representation

    Full text link
    Representing short text is becoming extremely important for a variety of valuable applications. However, it is challenging because short text involves many informal words and typos (i.e., the noise problem) but only a few words per text (i.e., the sparsity problem). Most existing work on representing short text relies on noise recognition and sparsity expansion. However, the noise in short text takes various forms and changes quickly, and most current methods fail to recognize it adaptively. It is also hard to explicitly expand a sparse text into a high-quality dense text. In this paper, we tackle the noise and sparsity problems in short text representation by learning multi-grain noise-tolerant patterns and then embedding the most significant patterns in a text as its representation. To achieve this goal, we propose a bi-level multi-scale masked CNN-RNN network to embed the most significant multi-grain noise-tolerant relations among words and characters in a text into a dense vector space. Comprehensive experiments on five large real-world datasets demonstrate that our method significantly outperforms the state-of-the-art competitors.
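    The "multi-grain" intuition above — that overlapping small-scale patterns survive typos better than whole words do — can be illustrated with plain character n-grams at several scales. This is a deliberately simplified stand-in for the paper's learned patterns (the actual model is a bi-level masked multi-scale CNN-RNN; the helper names here are mine):

```python
def multi_scale_ngrams(text, scales=(2, 3)):
    """Character n-grams extracted at several scales; a single
    typo corrupts only the few n-grams that overlap it."""
    grams = set()
    for n in scales:
        for i in range(len(text) - n + 1):
            grams.add(text[i:i + n])
    return grams

def noisy_similarity(a, b, scales=(2, 3)):
    """Jaccard overlap of multi-scale n-gram sets: a crude but
    noise-tolerant similarity between two short strings."""
    ga = multi_scale_ngrams(a, scales)
    gb = multi_scale_ngrams(b, scales)
    return len(ga & gb) / max(1, len(ga | gb))
```

    A misspelled word still shares most of its n-grams with the clean form ("hello" vs. "helo"), while unrelated words share almost none, which is the property the learned multi-grain patterns exploit at scale.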