
    Leveraging Node Attributes for Incomplete Relational Data

    Relational data are usually highly incomplete in practice, which motivates us to leverage side information to improve the performance of community detection and link prediction. This paper presents a Bayesian probabilistic approach that incorporates various kinds of node attributes, encoded in binary form, into relational models with a Poisson likelihood. Our method works flexibly with both directed and undirected relational networks. Inference is performed by an efficient Gibbs sampler that leverages the sparsity of both the networks and the node attributes. Extensive experiments show that our models achieve state-of-the-art link prediction results, especially with highly incomplete relational data. Comment: appearing in ICML 2017.
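
    To make the modelling idea concrete, here is a minimal generative sketch of a Poisson-likelihood relational model in which binary node attributes inform the latent node memberships. It is an illustration under assumed names and dimensions (n_nodes, n_feats, n_topics), not the paper's exact construction, and real inference would use the Gibbs sampler described in the abstract rather than ancestral sampling.

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, n_feats, n_topics = 100, 20, 5

    X = rng.integers(0, 2, size=(n_nodes, n_feats))        # binary node attributes
    W = rng.gamma(0.5, 1.0, size=(n_feats, n_topics))      # attribute-to-topic weights

    # Node memberships: the Gamma prior's shape is informed by the attributes.
    theta = rng.gamma(X @ W + 0.1, 1.0)                    # (n_nodes, n_topics)
    Lam = rng.gamma(0.5, 1.0, size=(n_topics, n_topics))   # topic-topic interaction rates

    rate = theta @ Lam @ theta.T                           # Poisson rate for every node pair
    A = rng.poisson(rate)                                  # directed relational counts

    # Link prediction score for a pair (i, j): P(A_ij >= 1) under the Poisson likelihood.
    def link_prob(i, j):
        return 1.0 - np.exp(-rate[i, j])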

    On Consistency of Graph-based Semi-supervised Learning

    Graph-based semi-supervised learning is one of the most popular methods in machine learning. Some of its theoretical properties, such as bounds on the generalization error and the convergence of the graph Laplacian regularizer, have been studied in the computer science and statistics literature. However, a fundamental statistical property, the consistency of the estimator produced by this method, has not been proved. In this article, we study the consistency problem under a non-parametric framework. We prove the consistency of graph-based learning in the case where the estimated scores are constrained to equal the observed responses on the labeled data. The sample sizes of both the labeled and unlabeled data are allowed to grow in this result. When the estimated scores are not required to equal the observed responses, a tuning parameter is used to balance the loss function against the graph Laplacian regularizer. We give a counterexample demonstrating that the estimator in this case can be inconsistent. The theoretical findings are supported by numerical studies. Comment: accepted by the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS).
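
    As a concrete reference point for the two estimators discussed in the abstract, the sketch below implements both the hard-constraint (interpolation) case and the penalized case with a tuning parameter balancing the squared loss against the graph Laplacian regularizer. The function and variable names (ssl_soft, ssl_interpolate, lam) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def laplacian(Wg):
        """Unnormalized graph Laplacian L = D - W of a symmetric weight matrix."""
        return np.diag(Wg.sum(axis=1)) - Wg

    def ssl_soft(Wg, labeled, y, lam):
        """Minimize sum_{i in labeled} (f_i - y_i)^2 + lam * f^T L f over f."""
        n = Wg.shape[0]
        L = laplacian(Wg)
        J = np.zeros((n, n))
        J[labeled, labeled] = 1.0            # diagonal indicator of labeled nodes
        b = np.zeros(n)
        b[labeled] = y
        return np.linalg.solve(J + lam * L, b)

    def ssl_interpolate(Wg, labeled, y):
        """Hard-constraint case: f equals y on the labeled nodes, harmonic elsewhere."""
        n = Wg.shape[0]
        L = laplacian(Wg)
        unlabeled = np.setdiff1d(np.arange(n), labeled)
        f = np.zeros(n)
        f[labeled] = y
        f[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                                       -L[np.ix_(unlabeled, labeled)] @ y)
        return f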

    Analytical considerations of flow boiling heat transfer in metal-foam filled tubes

    Flow boiling in a metal-foam filled tube was analytically investigated based on a modified microstructure model, an original boiling heat transfer model and a fin analysis for metal foams. A microstructure model of metal foams was established, with which the fiber diameter and surface area density were accurately predicted. The heat transfer model for flow boiling in metal foams was based on the annular flow pattern, in which the two-phase fluid consists of a vapor region in the center of the tube and a liquid region near the wall; nucleate boiling was assumed to occur only in the liquid region. A fin analysis and a heat transfer network for metal foams were integrated to obtain the convective heat transfer coefficient at the interface. The analytical solution was verified by its good agreement with experimental data. A parametric study of the heat transfer coefficient and the boiling mechanism was also carried out.
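
    The fin analysis mentioned above can be illustrated with textbook one-dimensional fin theory, treating a foam fiber as a cylindrical fin of diameter d_f and height H. This is a generic sketch under assumed parameter values, not the paper's full heat transfer network or microstructure model.

    import math

    def fin_efficiency(h, k_s, d_f, H):
        """Cylindrical-fin efficiency: eta = tanh(m*H)/(m*H) with m = sqrt(4h/(k_s*d_f))."""
        m = math.sqrt(4.0 * h / (k_s * d_f))
        return math.tanh(m * H) / (m * H)

    def foam_side_conductance(h, k_s, d_f, H, a_sf, A_base):
        """Approximate wall-to-fluid conductance: bare base area plus
        fin-efficiency-weighted foam surface area (a_sf in m^2/m^3)."""
        eta = fin_efficiency(h, k_s, d_f, H)
        A_foam = a_sf * A_base * H           # foam surface area in a layer of height H
        return h * (A_base + eta * A_foam)   # W/K

    # Assumed illustrative values: copper foam (k_s ~ 380 W/m-K), 0.3 mm fibers,
    # a 5 mm foam layer, and a liquid-side coefficient of 2000 W/m^2-K.
    UA = foam_side_conductance(h=2000.0, k_s=380.0, d_f=0.3e-3, H=5e-3,
                               a_sf=2000.0, A_base=1.0)
    print(f"Effective conductance: {UA:.0f} W/K")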

    Dirichlet belief networks for topic structure learning

    Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning the word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on the word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model. Comment: accepted at NIPS 2018.
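
    A minimal generative sketch of the multi-layer process described above: each topic's word distribution is drawn from a Dirichlet whose parameter is a weighted mixture of the topics in the layer above, so every layer stays on the vocabulary simplex and remains directly interpretable by words. The dimensions and hyperparameters below are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size = 1000
    layer_sizes = [20, 50, 100]                 # topics per layer, top to bottom

    # Top layer: topics drawn from a symmetric Dirichlet over the vocabulary.
    topics = rng.dirichlet(np.full(vocab_size, 0.1), size=layer_sizes[0])

    for K_below in layer_sizes[1:]:
        K_above = topics.shape[0]
        # Positive mixture weights connecting each lower-layer topic to the layer above.
        beta = rng.gamma(1.0, 1.0, size=(K_below, K_above))
        # Dirichlet parameter of each lower topic: a mixture of upper-layer topics
        # (a tiny floor is added only for numerical stability).
        alpha = beta @ topics + 1e-8
        topics = np.vstack([rng.dirichlet(a) for a in alpha])

    # `topics` now holds the bottom layer's word distributions over the vocabulary.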