    PGLDA: enhancing the precision of topic modelling using Poisson Gamma (PG) and Latent Dirichlet Allocation (LDA) for text information retrieval

    The Poisson document length distribution has been used extensively in past topic models, with the expectation that its effect disintegrates by the end of the model definition. This procedure often leads to downplaying word correlation within topics and reduces the precision or accuracy of retrieved documents. Existing document models, such as the Latent Dirichlet Allocation (LDA) model, do not accommodate the semantic representation of words. Therefore, in this thesis, the Poisson-Gamma Latent Dirichlet Allocation (PGLDA) model for modeling word dependencies in topic modeling is introduced. The PGLDA model relaxes the word-independence assumption of the existing LDA model by introducing a Gamma distribution that captures the correlation between adjacent words in documents. PGLDA is hybridized with distributed representations of documents (Doc2Vec) and topics (Topic2Vec) to form a new model named PGLDA2Vec. The hybridization is achieved by averaging the Doc2Vec and Topic2Vec vectors to form new word representation vectors, combined with the topics with the largest estimated probability under PGLDA. Model estimation for PGLDA and PGLDA2Vec was achieved by combining a Laplacian approximation of the PGLDA log-likelihood with the Feed-Forward Neural Network (FFN) approaches of Doc2Vec and Topic2Vec. The proposed PGLDA and hybrid PGLDA2Vec models were assessed using precision, micro F1 scores, perplexity, and coherence scores. Empirical results on three real-world datasets (20 Newsgroups, AG News, and Reuters) showed that the hybrid PGLDA2Vec model, with an average precision of 86.6% and an average F1 score of 96.3% across the three datasets, outperforms the other competing models reviewed.
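    A minimal sketch of the averaging step described in the abstract, assuming equal-dimensional Doc2Vec and Topic2Vec embeddings; the function name hybrid_vectors and all array shapes are illustrative stand-ins, not the thesis implementation:

```python
import numpy as np

def hybrid_vectors(doc_vecs, topic_vecs, doc_topic_probs):
    # Pick each document's highest-probability PGLDA topic.
    top_topic = doc_topic_probs.argmax(axis=1)
    # Average the document embedding with its top topic's embedding
    # (assumes both embeddings share the same dimensionality).
    return (doc_vecs + topic_vecs[top_topic]) / 2.0

# Toy usage with random stand-ins for the learned embeddings.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(4, 5))         # 4 documents, 5-dim Doc2Vec vectors
topic_vecs = rng.normal(size=(3, 5))       # 3 topics, 5-dim Topic2Vec vectors
probs = rng.dirichlet(np.ones(3), size=4)  # stand-in for PGLDA topic estimates
print(hybrid_vectors(doc_vecs, topic_vecs, probs).shape)  # (4, 5)
```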

    Graphical models beyond standard settings: lifted decimation, labeling, and counting

    With increasing complexity and growing problem sizes in AI and machine learning, inference and learning remain major issues in Probabilistic Graphical Models (PGMs). On the other hand, many problems are specified in such a way that symmetries arise from the underlying model structure. Exploiting these symmetries during inference, referred to as "lifted inference", has led to significant efficiency gains. This thesis provides several enhanced versions of known algorithms that prove to be liftable as well, thereby applying lifting in "non-standard" settings and extending the understanding of the applicability of lifted inference and of lifting in general. Among various other experiments, it is shown how lifted inference, in combination with an innovative Web-based data-harvesting pipeline, is used to label author-paper pairs with geographic information in online bibliographies. The result is a large-scale transnational bibliography containing affiliation information over time for roughly one million authors. Analyzing this dataset reveals the importance of understanding count data. Although counting is done virtually everywhere, mainstream PGMs have largely neglected count data. Where the ranges of the random variables are defined over the natural numbers, crude approximations to the true distribution are often made by discretization or by a Gaussian assumption. To handle count data, Poisson Dependency Networks (PDNs) are introduced, a new class of non-standard PGMs that naturally handle count data.
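    To illustrate the kind of local conditional a PDN uses in place of discretization or a Gaussian assumption, here is a minimal sketch that fits one node's conditional distribution as a log-linear Poisson model of its parent counts, via gradient ascent on the Poisson log-likelihood. The function fit_poisson_node, the learning rate, and the toy data are assumptions for illustration, not the thesis code:

```python
import numpy as np

def fit_poisson_node(parents, counts, lr=0.05, iters=500):
    # One local conditional of a PDN: counts | parents ~ Poisson(exp(w.x + b)).
    n, d = parents.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        rate = np.exp(parents @ w + b)  # conditional Poisson mean per sample
        # Gradient ascent on the Poisson log-likelihood w.r.t. w and b.
        w += lr * parents.T @ (counts - rate) / n
        b += lr * np.mean(counts - rate)
    return w, b

# Toy count data: one node's counts driven by two parent count variables.
rng = np.random.default_rng(1)
x = rng.poisson(2.0, size=(200, 2))
y = rng.poisson(np.exp(0.3 * x[:, 0] - 0.2 * x[:, 1]))
w, b = fit_poisson_node(x, y)
print(np.round(w, 2))  # recovered weights should land near [0.3, -0.2]
```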