273 research outputs found

    Deep Learning for Link Prediction in Dynamic Networks using Weak Estimators

    Link prediction is the task of evaluating the probability that an edge exists in a network, and it has useful applications in many domains. Traditional approaches rely on measuring the similarity between two nodes in a static context. Recent research has focused on extending link prediction to a dynamic setting, predicting the creation and destruction of links in networks that evolve over time. Though this is a difficult task, deep learning techniques have been shown to make notable improvements to prediction accuracy. To this end, we propose the novel application of weak estimators, in addition to traditional similarity metrics, to inexpensively build an effective feature vector for a deep neural network. Weak estimators have been used in a variety of machine learning algorithms to improve model accuracy, owing to their capacity to estimate changing probabilities in dynamic systems. Experiments indicate that our approach results in increased prediction accuracy on several real-world dynamic networks.
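    The abstract's feature construction can be illustrated with a minimal sketch. All names here are hypothetical, not the paper's code: a classic static similarity metric (common neighbors) is computed per snapshot, and a stochastic weak estimator, which exponentially discounts old observations so it can track a non-stationary edge probability, supplies the dynamic feature.

    ```python
    # Hypothetical sketch (names illustrative): combine static similarity
    # metrics with a weak estimator of a changing link probability.

    def common_neighbors(adj, u, v):
        # Static similarity: number of shared neighbors of u and v.
        return len(adj[u] & adj[v])

    def weak_estimate(observations, lam=0.9):
        # Stochastic weak estimator: exponentially discounts older
        # observations, so the estimate follows a drifting probability.
        p = 0.5  # uninformative prior
        for x in observations:  # x = 1 if the edge existed at that snapshot
            p = lam * p + (1.0 - lam) * x
        return p

    def feature_vector(adj_snapshots, edge_history, u, v):
        # Feature vector for a deep network: one similarity score per
        # snapshot, plus the weak estimate of the edge's probability.
        sims = [common_neighbors(adj, u, v) for adj in adj_snapshots]
        return sims + [weak_estimate(edge_history)]

    # Toy dynamic network with two snapshots of an adjacency structure.
    snap1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    snap2 = {0: {1}, 1: {0, 2}, 2: {1}}
    fv = feature_vector([snap1, snap2], [1, 1, 0], 0, 2)
    ```

    The weak estimator is cheap (one multiply-add per snapshot), which is what makes the feature vector inexpensive to build at scale.
    
    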

    The Rho GTPases Rac1, Cdc42, and RhoA Regulate APP Transport to Lysosomes and Aβ Production

    Alzheimer’s Disease (AD) is characterized by Beta-Amyloid (Aβ) plaques within the brain. Aβ peptides are produced by the cleavage of Amyloid Precursor Protein (APP). Our lab has previously discovered a novel pathway for APP internalization mediated by ADP-ribosylation factor 6 (Arf6). This pathway resembles macropinocytosis, transporting cell-surface APP directly to lysosomes, a possible site of Aβ production. We set out to characterize the effectors downstream of Arf6. In SN56 and N2A cells, we co-transfected HA-tagged APP (to label cell-surface APP) with compartment markers to visualize APP trafficking. We used dominant-negative and constitutively active mutants, pharmacological inhibitors, and siRNA against Rac1, Cdc42, and RhoA to determine their roles in APP macropinocytosis. APP trafficking to lysosomes was reduced after knockdown of Rac1, Cdc42, and RhoA, and inhibition of this transport reduced production of Aβ40 and Aβ42. Our findings indicate a role for Rac1, Cdc42, and RhoA in Aβ production.

    Asking More Informative Questions for Grounded Retrieval

    When a model is trying to gather information in an interactive setting, it benefits from asking informative questions. However, in the case of a grounded multi-turn image identification task, previous studies have been constrained to polar yes/no questions, limiting how much information the model can gain in a single turn. We present an approach that formulates more informative, open-ended questions. In doing so, we discover that off-the-shelf visual question answering (VQA) models often make presupposition errors, which standard information-gain question selection methods fail to account for. To address this issue, we propose a method that incorporates presupposition handling into both question selection and belief updates. Specifically, we use a two-stage process in which the model first filters out images that are irrelevant to a given question, then updates its beliefs about which image the user intends. Through self-play and human evaluations, we show that our method succeeds in asking informative open-ended questions, increasing accuracy over the previous state of the art by 14% while yielding 48% more efficient games in human evaluations.
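    The two-stage process described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the VQA model is stubbed out as a likelihood function, and all names are hypothetical.

    ```python
    # Hypothetical sketch of a two-stage belief update: filter images the
    # question's presupposition fails on, then do a Bayesian update.

    def update_beliefs(beliefs, is_relevant, likelihood, answer):
        # Stage 1: drop images the question does not apply to, so a
        # forced answer about them cannot corrupt the posterior.
        kept = {img: p for img, p in beliefs.items() if is_relevant(img)}
        # Stage 2: Bayesian update over the remaining images using the
        # (stubbed) VQA model's likelihood of the observed answer.
        post = {img: p * likelihood(img, answer) for img, p in kept.items()}
        z = sum(post.values())
        return {img: p / z for img, p in post.items()}

    # Toy example: three candidate images; the question presupposes an
    # animal is present, which is false for image "c".
    beliefs = {"a": 0.5, "b": 0.3, "c": 0.2}
    post = update_beliefs(
        beliefs,
        is_relevant=lambda img: img != "c",
        likelihood=lambda img, ans: 0.8 if img == "a" else 0.2,
        answer="a dog",
    )
    ```

    Filtering before updating is the key design choice: without stage 1, a presupposition-violating image would still receive a likelihood score and could absorb probability mass it should never hold.
    
    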

    Zero redundancy distributed learning with differential privacy

    Deep learning with large models has achieved great success in a wide range of domains. However, training these models with billions of parameters is very challenging in terms of training speed, memory cost, and communication efficiency, especially under the privacy-preserving regime of differential privacy (DP). On the one hand, DP optimization is comparably efficient to standard non-private optimization on a single GPU, but on multiple GPUs, existing DP distributed learning (such as pipeline parallelism) suffers from significantly worse efficiency. On the other hand, the Zero Redundancy Optimizer (ZeRO) is a state-of-the-art solution for standard distributed learning, exhibiting excellent training efficiency on large models, but making it work compatibly with DP is technically complicated. In this work, we develop a new systematic solution, DP-ZeRO, (I) to scale up the trainable DP model size, e.g. to GPT-100B, (II) to obtain the same computation and communication efficiency as the standard ZeRO, and (III) to enable mixed-precision DP training. Our DP-ZeRO, like the standard ZeRO, has the potential to train models of arbitrary size and is evaluated on the world's largest DP models in terms of the number of trainable parameters.
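    For context on why DP complicates sharded optimization, a minimal sketch of the standard DP-SGD gradient step (per-sample clipping plus calibrated Gaussian noise) is shown below. This is the generic mechanism any DP distributed scheme must preserve, not DP-ZeRO's actual implementation; all names are illustrative.

    ```python
    import numpy as np

    # Illustrative sketch (not the paper's code): the clip-and-noise step
    # of DP optimization. A sharded optimizer such as DP-ZeRO must
    # reproduce exactly this computation while partitioning gradients and
    # optimizer state across GPUs, which is where the difficulty lies.
    def dp_gradient(per_sample_grads, clip_norm=1.0, noise_mult=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Clip each per-sample gradient to bound any one example's influence.
        clipped = [
            g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
            for g in per_sample_grads
        ]
        total = np.sum(clipped, axis=0)
        # Gaussian noise scaled to the clipping bound gives the DP guarantee.
        noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
        return (total + noise) / len(per_sample_grads)
    ```

    The per-sample clipping is what breaks naive gradient sharding: unlike standard training, gradients cannot simply be averaged across workers before the per-example norms are bounded.
    
    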