    Large Scale Qualitative Spatio-Temporal Reasoning

    This thesis considers qualitative spatio-temporal reasoning (QSTR), a branch of artificial intelligence concerned with qualitative spatial and temporal relations between entities. Despite QSTR being an active area of research for many years, there has been comparatively little work on large scale qualitative spatio-temporal reasoning - reasoning using hundreds of thousands or millions of relations. The big data phenomenon of recent years means there is now a requirement for QSTR implementations that scale effectively and reason over large scale datasets. However, existing reasoners are limited in their scalability; new approaches to QSTR are needed. This thesis considers whether parallel, distributed programming techniques can address the challenges of large scale QSTR. Specifically, it presents the first in-depth investigation of adapting QSTR techniques to a distributed environment. This has resulted in a large scale qualitative spatial reasoner, ParQR, which has been evaluated against existing reasoners and alternative approaches to large scale QSTR. ParQR has been shown to outperform existing solutions, reasoning over far larger datasets than previously possible. The thesis then considers a specific application of large scale QSTR: querying knowledge graphs. This has two parts: first, integrating large scale, complex spatial datasets to generate an enhanced knowledge graph that can support qualitative spatial reasoning; second, adapting parallel, distributed QSTR techniques to implement a query answering system for spatial knowledge graphs. The query engine that has been developed can answer a variety of spatial queries. It has been evaluated and shown to provide more comprehensive query results than quantitative-only techniques.
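    The core operation in this style of qualitative reasoning is enforcing algebraic closure (path consistency) over a composition table of base relations. As an illustrative sketch only - not ParQR's actual parallel, distributed implementation - the loop below applies the standard Point Algebra (relations `<`, `=`, `>`) in plain Python; the function names and data layout are hypothetical:

    ```python
    from itertools import product

    # Point Algebra: base relations '<', '=', '>'; a constraint is a set of them.
    COMP = {
        ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
    }

    def compose(r1, r2):
        """Compose two disjunctive relations via the base composition table."""
        out = set()
        for b1, b2 in product(r1, r2):
            out |= COMP[(b1, b2)]
        return out

    def path_consistency(n, constraints):
        """Refine an n-variable network to algebraic closure.

        constraints: dict mapping (i, j) with i < j to a set of base relations.
        Returns the refined network, or None if the network is inconsistent.
        """
        FULL = {'<', '=', '>'}
        CONV = {'<': '>', '>': '<', '=': '='}
        net = [[set(FULL) for _ in range(n)] for _ in range(n)]
        for i in range(n):
            net[i][i] = {'='}
        for (i, j), rel in constraints.items():
            net[i][j] = set(rel)
            net[j][i] = {CONV[b] for b in rel}   # store the converse too
        changed = True
        while changed:
            changed = False
            for i, k, j in product(range(n), repeat=3):
                refined = net[i][j] & compose(net[i][k], net[k][j])
                if refined != net[i][j]:
                    if not refined:
                        return None              # some constraint became empty
                    net[i][j] = refined
                    changed = True
        return net

    # a < b and b < c entails a < c
    net = path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<'}})
    print(net[0][2])  # {'<'}
    ```

    The cubic triple loop is what limits sequential reasoners at scale; distributing that refinement work across machines is the kind of challenge the thesis addresses.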

    Adapting to Change: The Temporal Persistence of Text Classifiers in the Context of Longitudinally Evolving Data

    This thesis delves into the evolving landscape of NLP, focusing on the temporal persistence of text classifiers amid the dynamic nature of language use. The primary objective is to understand how changes in language patterns over time affect the performance of text classification models and to develop methodologies for maintaining their effectiveness. The research begins by establishing a theoretical foundation for text classification and temporal data analysis, highlighting the challenges posed by the evolving use of language and its implications for NLP models. A detailed exploration of various datasets, including stance detection and sentiment analysis datasets, sets the stage for examining these dynamics. The characteristics of the datasets, such as linguistic variation and temporal vocabulary growth, are carefully examined to understand their influence on text classifier performance. A series of experiments evaluates the performance of text classifiers across different temporal scenarios. The findings reveal a general trend of performance degradation over time, emphasizing the need for classifiers that can adapt to linguistic change. The experiments assess models' ability to estimate past and future performance based on their current efficacy and linguistic dataset characteristics, yielding valuable insights into the factors influencing model longevity. Innovative solutions are proposed to address the observed performance decline and adapt to temporal changes in language use. These include incorporating temporal information into word embeddings and comparing various methods across temporal gaps. The Incremental Temporal Alignment (ITA) method emerges as a significant contributor to enhancing classifier performance in same-period experiments, although it faces challenges in maintaining effectiveness over longer temporal gaps.
Furthermore, the exploration of machine learning and statistical methods highlights their potential to maintain classifier accuracy in the face of longitudinally evolving data. The thesis culminates in a shared task evaluation, where participant-submitted models are compared against baseline models to assess their temporal persistence. This comparison provides a comprehensive understanding of the submitted models' short-term, long-term, and overall persistence, offering valuable information to the field. The research identifies several future directions, including interdisciplinary approaches that integrate linguistics and sociology, tracking textual shifts on online platforms, extending the analysis to other classification tasks, and investigating the ethical implications of evolving language in NLP applications. This thesis contributes to the NLP field by highlighting the importance of evaluating text classifiers' temporal persistence and offering methodologies to enhance their sustainability in dynamically evolving language environments. The findings and proposed approaches pave the way for future research aimed at developing more robust, reliable, and temporally persistent text classification models.
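The degradation pattern the thesis measures can be illustrated with a toy experiment: fit a classifier on one time period, then test it on later periods whose vocabulary has drifted. This is a hypothetical sketch with synthetic data and a deliberately simple word-count model - not the thesis's datasets, classifiers, or the ITA method:

    ```python
    import random

    random.seed(0)

    # Hypothetical vocabulary drift: later periods increasingly use new slang.
    POS0, NEG0 = ["great", "superb"], ["awful", "dreadful"]
    POS1, NEG1 = ["lit", "fire"], ["mid", "cringe"]

    def sample(period, n=200):
        """Synthetic labelled texts; drift grows with the period index."""
        drift = period / 4
        data = []
        for _ in range(n):
            label = random.random() < 0.5
            old, new = (POS0, POS1) if label else (NEG0, NEG1)
            word = random.choice(new if random.random() < drift else old)
            data.append((word, label))
        return data

    def train(data):
        """Count word-label co-occurrences (a tiny Naive-Bayes-style model)."""
        counts = {}
        for word, label in data:
            pos, neg = counts.get(word, (0, 0))
            counts[word] = (pos + label, neg + (not label))
        return counts

    def predict(counts, word):
        # Unseen words fall back to (1, 1) and are predicted positive.
        pos, neg = counts.get(word, (1, 1))
        return pos >= neg

    model = train(sample(period=0))
    for t in range(5):
        test = sample(period=t)
        acc = sum(predict(model, w) == y for w, y in test) / len(test)
        print(f"period {t}: accuracy {acc:.2f}")
    ```

    On the training period the model is near-perfect, while in the fully drifted period its accuracy collapses toward chance, since none of the new vocabulary was seen at training time - the same shape of decline, in miniature, that motivates temporally adaptive methods such as ITA.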