
    Unsupervised Terminological Ontology Learning based on Hierarchical Topic Modeling

    In this paper, we present hierarchical relation-based latent Dirichlet allocation (hrLDA), a data-driven hierarchical topic model for extracting terminological ontologies from large collections of heterogeneous documents. In contrast to traditional topic models, hrLDA relies on noun phrases instead of unigrams, considers syntax and document structure, and enriches topic hierarchies with topic relations. Through a series of experiments, we demonstrate the superiority of hrLDA over existing topic models, especially for building hierarchies. Furthermore, we illustrate the robustness of hrLDA on noisy data sets, which are likely to occur in many practical scenarios. Our ontology evaluation results show that ontologies extracted by hrLDA are highly competitive with ontologies created by domain experts.
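    One distinguishing idea in the abstract is modeling noun phrases rather than unigrams. The toy sketch below illustrates only that tokenization idea, using a crude stopword-split heuristic in place of a real noun-phrase chunker; the phrase extractor, stopword list, and example document are illustrative assumptions, not the paper's actual method, and a real pipeline would add POS tagging and LDA inference on top.

    ```python
    import re
    from collections import Counter

    # Illustrative stopword list (assumption, not from the paper)
    STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "or",
                 "to", "for", "is", "are", "we", "with", "from", "by"}

    def noun_phrase_candidates(text):
        # Crude stand-in for a noun-phrase chunker: split the text on
        # stopwords and punctuation, keeping the remaining contiguous
        # word runs as candidate phrases. hrLDA itself would use real
        # syntactic analysis here.
        tokens = re.findall(r"[a-z]+", text.lower())
        phrases, run = [], []
        for tok in tokens:
            if tok in STOPWORDS:
                if run:
                    phrases.append(" ".join(run))
                    run = []
            else:
                run.append(tok)
        if run:
            phrases.append(" ".join(run))
        return phrases

    doc = ("The topic model extracts terminological ontologies "
           "from large collections of heterogeneous documents.")
    counts = Counter(noun_phrase_candidates(doc))
    # Multi-word candidates such as "heterogeneous documents" become
    # single modeling units instead of separate unigrams.
    ```

    Feeding such phrase counts (rather than unigram counts) into a hierarchical topic model is the kind of substitution the abstract describes.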

    Bäcklund transformations for fourth Painlevé hierarchies

    Bäcklund transformations (BTs) for ordinary differential equations (ODEs), and in particular for hierarchies of ODEs, are a topic of great current interest. Here we give an improved method of constructing BTs for hierarchies of ODEs. This approach is then applied to the fourth Painlevé (P_IV) hierarchies recently found by the same authors [Publ. Res. Inst. Math. Sci. (Kyoto) 37, 327–347 (2001)]. We show how the known pattern of BTs for P_IV can be extended to our P_IV hierarchies. Remarkably, the BTs required to do this are precisely the Miura maps of the dispersive water wave hierarchy. We also obtain the important result that the fourth Painlevé equation has only one nontrivial fundamental BT, and not two, as is frequently stated. Comment: 23 pages, 2 figures, to appear in Journal of Differential Equations
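    For reference, the fourth Painlevé equation whose BTs the abstract discusses is standardly written as:

    ```latex
    % Fourth Painlevé equation P_IV, with constant parameters \alpha, \beta
    w'' = \frac{(w')^2}{2w} + \frac{3}{2}\,w^{3} + 4 z w^{2}
          + 2\left(z^{2} - \alpha\right) w + \frac{\beta}{w}
    ```

    A BT maps a solution of P_IV for one parameter pair (α, β) to a solution for a shifted pair; the abstract's claim is that one nontrivial fundamental BT of this kind generates the rest.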

    Eliciting Topic Hierarchies from Large Language Models

    Finding topics to write about can be a mentally demanding process. Topic hierarchies, however, can help writers explore topics at varying levels of specificity. In this paper, we use large language models (LLMs) to help construct topic hierarchies. Although LLMs have access to such knowledge, it can be difficult to elicit due to issues of specificity, scope, and repetition. We designed and tested three different prompting techniques to find one that maximized accuracy. We found that prepending the general topic area to a prompt yielded the most accurate results, with 85% accuracy. We discuss applications of this research, including STEM writing, education, and content creation. Comment: 4 pages, 4 figures
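    The winning technique in the abstract is prepending the general topic area to the prompt. The abstract does not give the exact prompt wording, so the template below is a hypothetical sketch of that strategy; the function name, phrasing, and parameters are all illustrative assumptions.

    ```python
    def hierarchy_prompt(general_area, parent_topic, n_subtopics=5):
        # Illustrative template for the "prepend the general topic area"
        # strategy; the paper's actual prompt text is not given in the
        # abstract, so this wording is an assumption.
        return (
            f"General topic area: {general_area}\n"
            f"List {n_subtopics} specific subtopics of '{parent_topic}' "
            f"within this area, one per line, without repeating the parent topic."
        )

    prompt = hierarchy_prompt("renewable energy", "solar power", n_subtopics=3)
    ```

    Anchoring the request to the general area first is meant to constrain scope and reduce the repetition and drift the abstract mentions.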