2 research outputs found

    Topic modelling of Ukraine war-related news using Latent Dirichlet Allocation with collapsed Gibbs sampling

    No full text
    This research applies topic modeling to war-related news from the Ukraine war. The objective is to use Latent Dirichlet Allocation (LDA) with collapsed Gibbs sampling to identify distinct content groups in war-related news. The method involves scraping data from a Ukrainian news website, preprocessing it, and applying LDA with collapsed Gibbs sampling to infer the latent topics within the corpus. The results include the identification of twelve distinct topics and the keywords that characterize each topic. Analysis of the results provides insights into the context of each topic, such as discussions of safety measures during wartime, the consequences of military actions, and reports on military casualties. The research concludes that LDA with collapsed Gibbs sampling is a valuable tool for identifying and understanding the context of war-related news. However, there may be discrepancies between the model's results and human interpretation, possibly due to limitations of the model parameters and the presence of noisy data. Future research should focus on optimizing model parameters, filtering noisy data, and improving the analysis of topic context to enhance the reliability and interpretability of the results.
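
    As a rough illustration of the inference step described above, the sketch below implements a minimal collapsed Gibbs sampler for LDA in Python. The corpus, vocabulary, hyperparameters (alpha, beta), and iteration count are illustrative assumptions rather than values from the paper; only the topic count of twelve comes from the abstract, and scraping and preprocessing are assumed to have already produced each document as a list of token ids.

import numpy as np

def collapsed_gibbs_lda(docs, n_topics=12, vocab_size=None,
                        alpha=0.1, beta=0.01, n_iter=500, seed=0):
    """docs: list of token-id lists. Returns (doc-topic, topic-word) count matrices."""
    rng = np.random.default_rng(seed)
    V = vocab_size or (max(max(d) for d in docs) + 1)
    D = len(docs)

    # Count matrices maintained by the collapsed sampler.
    ndk = np.zeros((D, n_topics), dtype=np.int64)   # topic counts per document
    nkw = np.zeros((n_topics, V), dtype=np.int64)   # word counts per topic
    nk = np.zeros(n_topics, dtype=np.int64)         # total tokens per topic
    z = []                                          # topic assignment per token

    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1
            nkw[k, w] += 1
            nk[k] += 1

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current token from the counts.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Collapsed conditional:
                # p(z = k | rest) proportional to (ndk + alpha) * (nkw + beta) / (nk + V*beta)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                # Reinsert the token with its newly sampled topic.
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    return ndk, nkw

    After sampling, the largest entries in each row of the topic-word matrix give the keywords that characterize a topic, which is how topics like those reported in the abstract would typically be inspected.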

    Selection of Large Language Model for development of Interactive Chat Bot for SaaS Solutions

    No full text
    Chatbots play a crucial role in modern businesses. Developing a chatbot is a tedious and complex task that takes an enormous amount of time. To enable businesses to build chatbots quickly and with limited resources, the use of LLMs for chatbot development needs to be explored. In this article the researchers developed a prototype chatbot architecture that enables businesses to use LLMs interchangeably. Dozens of LLMs already exist, so it is crucial to select the most capable LLM to power the chatbot, and the LLM must be cost-efficient for the business to remain profitable. The article evaluates three LLMs and endorses ChatGPT for its superior speed, cost-effectiveness, and relevance, backed by OpenAI's pioneering status. Nonetheless, it notes the prototype's limitation in tracking conversation history, an area ripe for future enhancement. The current focus on pre-trained LLMs sets the stage for subsequent research into personalized, fine-tuned chatbot experiences for SaaS customers.
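
    A minimal sketch of the interchangeable-LLM idea is shown below. The class and method names are illustrative assumptions rather than the authors' actual architecture, and the concrete provider calls are left as placeholders so no particular vendor API is assumed. Like the prototype described in the abstract, this sketch does not track conversation history.

from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Common interface that every candidate LLM provider must implement."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ChatGPTBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Placeholder: call the OpenAI chat API here and return the reply text.
        raise NotImplementedError

class OtherLLMBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Placeholder for a second provider, so candidates can be swapped or compared.
        raise NotImplementedError

class SaaSChatBot:
    """Chatbot that is agnostic to the underlying LLM; no conversation history is kept."""
    def __init__(self, backend: LLMBackend):
        self.backend = backend

    def reply(self, user_message: str) -> str:
        return self.backend.complete(user_message)

# Swapping providers is a one-line change:
bot = SaaSChatBot(ChatGPTBackend())

    Because the backend is just a constructor argument, evaluating candidate LLMs side by side on speed, cost, and relevance becomes straightforward.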