12 research outputs found

    Using Hadoop to implement a semantic method of assessing the quality of research medical datasets.

    Get PDF
    In this paper, a system for storing and querying medical RDF data using Hadoop is developed. This approach enables us to create an inherently parallel framework that scales the workload across a cluster. Unlike existing solutions, our framework uses highly optimised joining strategies to enable the completion of eight separate SPARQL queries, comprising over eighty distinct joins, in only two Map/Reduce iterations. Results are presented comparing an optimised version of our solution against Jena TDB, demonstrating the superior performance of our system and its viability for assessing the quality of medical data.
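    As a rough illustration of the idea of completing RDF joins within a single Map/Reduce pass, the sketch below groups triples by a shared join key and combines the two sides in the reduce step. This is a minimal, hypothetical example: the triples, predicates, and join strategy are assumptions for illustration, not the paper's optimised implementation.

    ```python
    # Minimal sketch (not the paper's implementation): joining RDF triples on a
    # shared variable with a MapReduce-style group-by, as one Map/Reduce pass might.
    from collections import defaultdict

    # Hypothetical triples: (subject, predicate, object)
    triples = [
        ("patient1", "hasRecord", "rec1"),
        ("rec1", "hasDiagnosis", "diabetes"),
        ("patient2", "hasRecord", "rec2"),
        ("rec2", "hasDiagnosis", "asthma"),
    ]

    def map_phase(triples):
        """Emit (join_key, tagged_value) pairs keyed on the join variable."""
        for s, p, o in triples:
            if p == "hasRecord":
                yield o, ("left", s)        # join key is the record id
            elif p == "hasDiagnosis":
                yield s, ("right", o)

    def reduce_phase(pairs):
        """Group by key and combine the two sides, like a reduce-side join."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        for values in groups.values():
            lefts = [v for tag, v in values if tag == "left"]
            rights = [v for tag, v in values if tag == "right"]
            for patient in lefts:
                for diagnosis in rights:
                    yield patient, diagnosis

    print(list(reduce_phase(map_phase(triples))))  # [('patient1', 'diabetes'), ...]
    ```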

    Model Checking Using Large Language Models—Evaluation and Future Directions

    Get PDF
    Large language models (LLMs) such as ChatGPT have risen in prominence recently, leading to the need to analyze their strengths and limitations for various tasks. The objective of this work was to evaluate the performance of large language models for model checking, which is used extensively in critical tasks such as software and hardware verification. A set of problems was proposed as a benchmark in this work, and three LLMs (GPT-4, Claude, and Gemini) were evaluated with respect to their ability to solve these problems. The evaluation was conducted by comparing the responses of the three LLMs with the gold standard provided by model checking tools. The results illustrate the limitations of LLMs in these tasks, identifying directions for future research. Specifically, the best overall performance (ratio of problems solved correctly) was 60%, indicating a high probability of reasoning errors by the LLMs, especially when dealing with more complex scenarios requiring many reasoning steps; the LLMs typically performed better when generating scripts for solving the problems rather than solving them directly.
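    The evaluation protocol described above can be pictured with a minimal sketch: compare each LLM answer against the gold-standard verdict from a model checking tool and report the ratio of problems solved correctly. The problem identifiers, verdict labels, and answers below are illustrative assumptions, not the benchmark itself.

    ```python
    # Minimal sketch of the evaluation protocol: compare LLM answers with
    # gold-standard verdicts from a model checker and report the accuracy ratio.

    # Hypothetical gold standard from a model checking tool: problem id -> verdict
    gold_standard = {"p1": "satisfied", "p2": "violated", "p3": "satisfied"}

    # Hypothetical responses from one LLM for the same problems
    llm_answers = {"p1": "satisfied", "p2": "satisfied", "p3": "satisfied"}

    def accuracy(gold: dict, answers: dict) -> float:
        """Ratio of problems where the LLM verdict matches the model checker."""
        correct = sum(1 for pid, verdict in gold.items() if answers.get(pid) == verdict)
        return correct / len(gold)

    print(f"Problems solved correctly: {accuracy(gold_standard, llm_answers):.0%}")
    ```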

    Enabling the use of a planning agent for urban traffic management via enriched and integrated urban data

    Get PDF
    Improving a city’s infrastructure is seen as a crucial part of its sustainability, leading to efficiencies and opportunities driven by technology integration. One significant step is to support the integration and enrichment of a broad variety of data, often using state-of-the-art linked data approaches. Among the many advantages of such enrichment is that it may enable the use of intelligent processes to autonomously manage urban facilities such as traffic signal controls. In this paper, we document an attempt to integrate sets of sensor and historical data using a data hub and a set of ontologies for the data. We argue that access to such high-level integrated data sources enhances the capabilities of an urban transport operator. We demonstrate this by documenting the development of a planning agent which takes such data as inputs in the form of logic statements and, when given traffic goals to achieve, outputs complex traffic signal strategies that help transport operators deal with exceptional events such as road closures or road traffic saturation. The aim is to create an autonomous agent which reacts to commands from transport operators in the face of exceptional events involving saturated roads, and creates, executes and monitors plans to deal with the effects of such events. We evaluate the intelligent agent in a region of a large urban area, under the direction of urban transport operators.
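    The sketch below gives a minimal, hypothetical picture of the input side of such an agent: sensor readings are rendered as ground logic statements and a traffic goal is checked against them before planning. The predicate names, threshold, and junction identifiers are assumptions for illustration only, not the authors' ontology or planner.

    ```python
    # Minimal sketch, under assumed fact and goal formats: enriched sensor data is
    # turned into logic-style statements a planner could consume, plus a trivial
    # check of whether a traffic goal already holds.

    def sensor_reading_to_fact(junction: str, occupancy: float, threshold: float = 0.8) -> str:
        """Render a sensor reading as a ground logic statement."""
        status = "saturated" if occupancy >= threshold else "flowing"
        return f"{status}({junction})"

    readings = {"junctionA": 0.92, "junctionB": 0.35}   # hypothetical occupancy data
    facts = {sensor_reading_to_fact(j, occ) for j, occ in readings.items()}
    goal = "flowing(junctionA)"                          # goal set by a transport operator

    print(facts)                                    # {'saturated(junctionA)', 'flowing(junctionB)'}
    print("goal already satisfied?", goal in facts)  # False -> the planner must act
    ```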

    Data Platforms: Interoperability and Insight

    No full text

    Data-Driven Decision Support for Adult Autism Diagnosis Using Machine Learning

    No full text
    Adult referrals to specialist autism spectrum disorder (ASD) diagnostic services have increased in recent years, placing strain on existing services and illustrating the need for a reliable screening tool in order to identify and prioritize patients most likely to receive an ASD diagnosis. In this work, a detailed overview of existing approaches is presented and a data-driven analysis using machine learning is applied to a dataset of 192 adult autism cases. Our results show initial promise, achieving a total positive rate (i.e., the ratio of correctly classified instances to all instances) of up to 88.5%, but also point to limitations of currently available data, opening up avenues for further research. The main direction of this research is the development of a novel autism screening tool for adults (ASTA), also introduced in this work; preliminary results indicate that ASTA is suitable for use as a screening tool for adult populations in clinical settings.
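    A minimal sketch of the kind of analysis quoted above, assuming scikit-learn and a synthetic stand-in for the 192-case clinical dataset (the features, model, and cross-validation setup are illustrative assumptions, not the ASTA pipeline):

    ```python
    # Minimal sketch: train a classifier and report the ratio of correctly
    # classified instances, the metric quoted above, on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in: 192 cases, 10 screening features, binary ASD label
    X, y = make_classification(n_samples=192, n_features=10, random_state=0)

    clf = RandomForestClassifier(random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"Correctly classified ratio: {scores.mean():.1%}")
    ```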

    Optimizing a semantically enriched hypercat-enabled internet of things data hub (Short paper)

    No full text
    Large volumes of data are generated by the increasing number of sensor networks and smart devices. Such data are generated and published in multiple formats, highlighting the significance of interoperability for the success of what has come to be known as the Internet of Things (IoT). The BT Hypercat Data Hub provides a focal point for the sharing and consumption of available datasets from a wide range of sources. In this work, we present a series of optimizations applied to the BT Hypercat Data Hub that enable scalable SPARQL query answering over relational databases, together with an access control mechanism that filters SPARQL results based on users' subscriptions.
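    The access-control idea can be sketched minimally as filtering SPARQL result bindings against a per-user subscription store. The bindings, dataset names, and subscription format below are assumptions for illustration and do not reflect the BT Hypercat Data Hub's actual API.

    ```python
    # Minimal sketch: SPARQL result rows are filtered so a user only sees
    # bindings for datasets covered by their subscriptions.

    # Hypothetical bindings from a SPARQL SELECT over the data hub
    bindings = [
        {"sensor": "urn:sensor:1", "dataset": "air-quality", "value": "42"},
        {"sensor": "urn:sensor:2", "dataset": "car-parks", "value": "17"},
    ]

    # Hypothetical subscription store: user id -> datasets the user may read
    subscriptions = {"alice": {"air-quality"}}

    def filter_results(user: str, rows: list) -> list:
        """Keep only rows whose dataset appears in the user's subscriptions."""
        allowed = subscriptions.get(user, set())
        return [row for row in rows if row["dataset"] in allowed]

    print(filter_results("alice", bindings))  # only the air-quality binding survives
    ```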

    Taking stock of available technologies for compliance checking on first-order knowledge

    No full text
    This paper analyses and compares some of the automated reasoners that have been used in recent research for compliance checking. We are interested here in formalizations at the first-order level. Past literature on normative reasoning mostly focuses on the propositional level. However, the propositional level is of little use for concrete LegalTech applications, in which compliance checking must be enforced on (large) sets of individuals. This paper formalizes a selected use case in the considered reasoners and compares the implementations. The comparison highlights that a lot of further research still needs to be done to integrate the benefits featured by the different reasoners into a single standardized first-order framework. All source code is available at https://github.com/liviorobaldo/compliancecheckers.
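    To make the first-order setting concrete, the sketch below checks a single hypothetical obligation ("every data controller must have an appointed DPO") over a set of individuals, which is the kind of instance-level check that a propositional formalization cannot express. The rule, predicates, and facts are assumptions for illustration, not taken from the reasoners compared in the paper.

    ```python
    # Minimal sketch, not one of the compared reasoners: a first-order style
    # compliance rule checked over individuals rather than propositions.

    # Hypothetical knowledge base of ground facts
    controllers = {"acme", "globex"}   # individuals for which Controller(x) holds
    has_dpo = {"acme"}                 # individuals for which appointedDPO(x) holds

    def non_compliant(controllers: set, has_dpo: set) -> set:
        """Individuals violating the obligation: Controller(x) and not appointedDPO(x)."""
        return {x for x in controllers if x not in has_dpo}

    print(non_compliant(controllers, has_dpo))   # {'globex'}
    ```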
