4 research outputs found

    FAIR Ontologies for transparent and accountable AI: a hospital adverse incidents vocabulary case study

    This paper analyses the relation between FAIR (Findable, Accessible, Interoperable, Reusable) ontologies and the accountability and transparency of ontology-based AI systems. It also identifies governance-related gaps in ontology quality evaluation metrics by examining their relation to FAIR principles and FAccT (Fairness, Accountability, Transparency) governance aspects. A simple SKOS vocabulary, titled "Hospital Adverse Incidents Classification Scheme" (HAICS), is used as the case study. We found a direct relation between FAIR principles and FAccT AI: FAIR ontologies enhance transparency and accountability in ontology-based AI systems. We therefore suggest that "FAIRness" be assessed as one of the ontology quality evaluation aspects.
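
    To make the role of such a vocabulary concrete, the sketch below shows, using the Python rdflib library, how a small SKOS concept scheme in the spirit of HAICS might be expressed together with the licence and title metadata that support FAIRness. Every URI, concept and label here is a hypothetical placeholder, not the published scheme.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DCTERMS, RDF, SKOS

        EX = Namespace("http://example.org/haics/")  # placeholder namespace, not the real one

        g = Graph()
        g.bind("skos", SKOS)
        g.bind("dcterms", DCTERMS)

        # Declare the concept scheme itself, with FAIR-supporting metadata:
        # an identifier, a title and an explicit licence.
        scheme = EX["scheme"]
        g.add((scheme, RDF.type, SKOS.ConceptScheme))
        g.add((scheme, DCTERMS.title,
               Literal("Hospital Adverse Incidents Classification Scheme", lang="en")))
        g.add((scheme, DCTERMS.license,
               URIRef("https://creativecommons.org/licenses/by/4.0/")))

        # One hypothetical top concept; a real scheme would hold many,
        # linked by skos:broader / skos:narrower relations.
        fall = EX["patient-fall"]
        g.add((fall, RDF.type, SKOS.Concept))
        g.add((fall, SKOS.prefLabel, Literal("Patient fall", lang="en")))
        g.add((fall, SKOS.inScheme, scheme))
        g.add((scheme, SKOS.hasTopConcept, fall))

        print(g.serialize(format="turtle"))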

    The ARK platform: enabling risk management through semantic web technologies

    This paper describes the Access Risk Knowledge (ARK) platform and ontologies for socio-technical risk analysis using the Cube methodology. Linked Data is used in ARK to integrate qualitative clinical risk management data with quantitative operational data and analytics. This required the development of a novel clinical safety management taxonomy to annotate qualitative risk data and make it more amenable to automated analysis. The platform is complemented by two other ontologies that support structured data capture for the Cube socio-technical analysis methodology developed by organisational psychologists at Trinity College Dublin. The ARK platform's development and trials have shown the benefits of a Semantic Web approach for flexibly integrating data, making qualitative data machine-readable, and building dynamic, highly usable web applications for clinical risk management. The main results so far are a self-annotated, standards-based taxonomy for risk and safety management, expressed in the W3C's standard Simple Knowledge Organisation System (SKOS), and a Cube data capture, curation and analysis platform for clinical risk management domain experts. The paper describes the ontologies and their development process, our initial clinical safety management use case, and lessons learned from applying ARK to real-world use cases. This work has shown the potential of Linked Data to integrate operational and safety data into a unified information space, supporting more continuous, adaptive and predictive clinical risk management.
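
    The core ARK pattern described above, annotating qualitative risk records with SKOS taxonomy concepts so they become machine-queryable alongside operational data, can be sketched as follows with rdflib. The namespaces, properties and concept names are assumptions for illustration, not the actual ARK ontologies.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import DCTERMS, RDF, SKOS

        ARK = Namespace("http://example.org/ark/")     # placeholder, not the real ARK namespace
        TAX = Namespace("http://example.org/ark/tax/")

        g = Graph()
        report = ARK["incident-42"]                    # hypothetical incident report
        g.add((report, RDF.type, ARK.RiskReport))
        g.add((report, DCTERMS.description,
               Literal("Medication dose recorded incorrectly during handover.")))

        # Annotating the free text with a taxonomy concept makes it queryable.
        g.add((report, DCTERMS.subject, TAX["medication-error"]))
        g.add((TAX["medication-error"], RDF.type, SKOS.Concept))
        g.add((TAX["medication-error"], SKOS.prefLabel,
               Literal("Medication error", lang="en")))

        # Retrieve all reports annotated with a given concept via SPARQL.
        q = """
        PREFIX dcterms: <http://purl.org/dc/terms/>
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT ?report WHERE {
            ?report dcterms:subject ?concept .
            ?concept skos:prefLabel "Medication error"@en .
        }
        """
        for row in g.query(q):
            print(row.report)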

    AccTEF: A transparency and accountability evaluation framework for ontology-based systems

    This paper proposes a new accountability and transparency evaluation framework (AccTEF) for ontology-based systems (OSysts). AccTEF is based on an analysis of the relation between a set of widely accepted data governance principles, i.e. findable, accessible, interoperable, reusable (FAIR), and the concepts of accountability and transparency. The accountability and transparency of the input ontologies and vocabularies of OSysts are evaluated by analysing the relation between vocabulary and ontology quality evaluation metrics, FAIR, and accountability and transparency concepts. An ontology-based knowledge extraction pipeline serves as the use case in this study. Understanding the relation between FAIR and accountability and transparency helps identify and mitigate risks associated with deploying OSysts, and yields design guidelines for embedding accountability and transparency in OSysts. We found that FAIR can be used as a transparency indicator. We also found that the studied vocabulary and ontology quality evaluation metrics do not cover FAIR, accountability, or transparency; accordingly, we suggest that these concepts be treated as vocabulary and ontology quality evaluation aspects. To the best of our knowledge, this is the first time the relation between FAIR and the concepts of accountability and transparency has been identified and used for evaluation.
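
    As a rough illustration of using FAIRness as a transparency indicator, the sketch below checks an ontology for a few pieces of metadata whose absence such an evaluation would flag. The checklist and file name are illustrative assumptions, not the AccTEF metrics themselves.

        from rdflib import Graph
        from rdflib.namespace import DCTERMS, OWL, RDF

        def fair_indicators(g: Graph) -> dict:
            """Coarse boolean indicators of FAIR-supporting metadata in a graph."""
            onts = list(g.subjects(RDF.type, OWL.Ontology))

            def declared(prop):
                # True if any declared ontology carries the given property.
                return any(g.value(o, prop) is not None for o in onts)

            return {
                "findable: ontology URI declared": bool(onts),
                "reusable: licence stated": declared(DCTERMS.license),
                "reusable: creator/provenance": declared(DCTERMS.creator),
                "transparent: version info": declared(OWL.versionInfo),
            }

        g = Graph()
        g.parse("my-ontology.ttl")  # hypothetical local file
        for check, ok in fair_indicators(g).items():
            print("PASS" if ok else "FAIL", check)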

    A risk governance framework for healthcare decision support systems based on socio-technical analysis

    We are developing an Artificial Intelligence (AI) risk governance framework based on human factors and AI governance principles to make automated healthcare decision support safer and more accountable. Today, the healthcare system faces a huge reporting overload, which has made manual processing and comprehensive decision-making impossible. Emerging advances in AI, and especially Natural Language Processing, seem an attractive answer to human limitations in processing high volumes of reports. However, there are known risks to automation, including the organisational change risk of deploying AI itself, as well as emotional and ethical factors, which are rarely taken into consideration in AI-based decision-making. To explore this, we will first construct a Decision Support System (DSS) tool based on a knowledge graph extracted from real-world healthcare reports. The tool will then be deployed in a controlled manner in a hospital and its operation analysed using an established socio-technical methodology developed by the Centre for Innovative Human Systems at Trinity College Dublin over 25 years of research. We will contribute by integrating computer science with organisational psychology, using human factors methods to identify the impact of AI-based healthcare DSS, their associated risks, and the ethical and legal challenges. We hypothesise that collaborating with organisational psychologists to consider the global system of human decision-making and AI-based DSS will help minimise the risk of AI-based decision-making in a way that ensures fairness, accountability, and transparency. This study will be carried out with our partner hospital, St. James's Hospital in Dublin.
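
    The first step outlined above, building a knowledge graph from healthcare reports, could in outline look like the following rdflib sketch. The NLP extraction itself is elided and stood in for by hand-written tuples; all URIs, properties and field values are hypothetical.

        from collections import Counter

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        HC = Namespace("http://example.org/healthcare-kg/")  # placeholder namespace

        # Stand-in for NLP output: (report id, incident type, contributing factor).
        extracted = [
            ("r1", "patient-fall", "staffing-shortage"),
            ("r2", "medication-error", "handover-gap"),
            ("r3", "patient-fall", "handover-gap"),
        ]

        g = Graph()
        for rid, incident, factor in extracted:
            report = HC[rid]
            g.add((report, RDF.type, HC.IncidentReport))
            g.add((report, HC.describesIncident, HC[incident]))
            g.add((report, HC.hasContributingFactor, HC[factor]))
            g.add((HC[incident], RDFS.label, Literal(incident.replace("-", " "))))

        # A downstream DSS could then aggregate over the graph, e.g. count
        # incident reports per contributing factor.
        counts = Counter(g.objects(None, HC.hasContributingFactor))
        print(counts)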
