5 research outputs found

    Towards a Taxonomy of AI Risks in the Health Domain

    The adoption of AI in the health sector has its share of benefits and harms for various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people's lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order to protect the fundamental interests and rights of those affected. This will increase the level to which these systems become ethically acceptable, legally permissible, and socially sustainable. In this paper, we first discuss the necessity of AI risk management in the health domain from the ethical, legal, and societal perspectives. We then present a taxonomy of risks associated with the use of AI systems in the health domain, called HART, accessible online at https://w3id.org/hart. HART mirrors the risks of a variety of real-world incidents caused by the use of AI in the health sector. Lastly, we discuss the implications of the taxonomy for different stakeholder groups and further research. This project is the result of interdisciplinary research within the PROTECT (Protecting Personal Data Amidst Big Data Innovation) project and has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie SkƂodowska-Curie grant agreement No 813497.

    To be high-risk, or not to be - semantic specifications and implications of the AI act’s high-risk AI applications and harmonised standards

    The EU’s proposed AI Act sets out a risk-based regulatory framework to govern the potential harms emanating from use of AI systems. Within the AI Act’s hierarchy of risks, the AI systems that are likely to incur “high-risk” to health, safety, and fundamental rights are subject to the majority of the Act’s provisions. To include uses of AI where fundamental rights are at stake, Annex III of the Act provides a list of applications wherein the conditions that shape high-risk AI are described. For high-risk AI systems, the AI Act places obligations on providers and users regarding the use of AI systems and the keeping of appropriate documentation through the use of harmonised standards. In this paper, we analyse the clauses defining the criteria for high-risk AI in Annex III to simplify identification of potential high-risk uses of AI by making explicit the “core concepts” whose combination makes them high-risk. We use these core concepts to develop an open vocabulary for AI risks (VAIR) to represent and assist with AI risk assessments in a form that supports automation and integration. VAIR is intended to assist with identification and documentation of risks by providing a common vocabulary that facilitates knowledge sharing and interoperability between actors in the AI value chain. Given that the AI Act relies on harmonised standards for much of its compliance and enforcement regarding high-risk AI systems, we explore the implications of current international standardisation activities undertaken by ISO and emphasise the necessity of better risk and impact knowledge bases such as VAIR that can be integrated with audits and investigations to simplify the AI Act’s application.
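The abstract describes decomposing Annex III clauses into "core concepts" whose combination marks a use of AI as high-risk. A minimal sketch of that idea is shown below; the concept fields (`domain`, `purpose`, `subject`) and the example clause data are illustrative assumptions, not terms taken from VAIR or the AI Act:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AIUse:
    """A candidate AI use described by illustrative 'core concept' fields."""
    domain: str    # e.g. "employment", "education"
    purpose: str   # e.g. "recruitment screening", "evaluation"
    subject: str   # who is affected, e.g. "job applicants"


# Hypothetical encoding of Annex III-style clauses: each entry is a
# combination of core-concept values that together make a use high-risk.
HIGH_RISK_COMBINATIONS = [
    {"domain": "employment", "purpose": "recruitment screening"},
    {"domain": "education", "purpose": "evaluation"},
]


def is_high_risk(use: AIUse) -> bool:
    """True if the use matches every core concept of some combination."""
    return any(
        all(getattr(use, field) == value for field, value in combo.items())
        for combo in HIGH_RISK_COMBINATIONS
    )


print(is_high_risk(AIUse("employment", "recruitment screening", "job applicants")))  # True
```

The point of the decomposition is exactly this kind of mechanical check: once the core concepts are explicit, matching a concrete use against the high-risk conditions becomes automatable.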

    Comparison and analysis of 3 Key AI documents: EU’s proposed AI Act, assessment list for trustworthy AI (ALTAI), and ISO/IEC 42001 AI management system

    Conforming to multiple and sometimes conflicting guidelines, standards, and legislations regarding the development, deployment, and governance of AI is a serious challenge for organisations. While AI standards and regulations are both in early stages of development, it is prudent to avoid a highly fragmented landscape and market confusion by identifying the gaps and resolving the potential conflicts. This paper provides an initial comparison of the ISO/IEC 42001 AI management system standard with the EU trustworthy AI assessment list (ALTAI) and the proposed AI Act, using an upper-level ontology for semantic interoperability between trustworthy AI documents with a focus on activities. The comparison is provided as an RDF resource graph to enable further enhancement and reuse in an extensible and interoperable manner.

    The Effect of glial cells inhibition on the progression of seizures induced by chemical kindling in male rats

    Background & Objectives: Considering the role of glial cells in synaptic transmission, regulation of neurotransmitter concentration in the synaptic cleft, K+ buffering, and release of gliotransmitters, the purpose of this study is to investigate the effect of glial cell inhibition on the progression of seizures induced by chemical kindling in rats. Materials & Methods: For chemical kindling, animals received pentylenetetrazol (PTZ), 35 mg/kg intraperitoneally every 48 hours; five distinct seizure stages appeared gradually, and seizure parameters including maximum seizure stage (SS), stage 4 latency (S4L), stage 4 & 5 duration (S5D), and seizure duration (SD) were measured during the 20 min after PTZ injection. Seizure parameters were then evaluated in animals treated with intracerebroventricular (ICV) administration of fluorocitrate (a glial cell inhibitor), injected 30 min before PTZ, and compared with PTZ-treated animals. Results: Glial cell inhibition with ICV injection of fluorocitrate decreased SS, S5D, and SD and increased S4L significantly (P<0.05, P<0.01, P<0.001). Conclusion: On the basis of the obtained results, it may be concluded that glial cell inhibition reduces the spreading rate of epileptiform activity in the nervous system and the duration of neuronal hyperexcitability, and also prevents the progression of seizures to the final stages.

    Data Privacy Vocabulary (DPV) - Version 2

    The Data Privacy Vocabulary (DPV), developed by the W3C Data Privacy Vocabularies and Controls Community Group (DPVCG), enables the creation of machine-readable, interoperable, and standards-based representations for describing the processing of personal data. The group has also published extensions to the DPV to describe specific applications to support legislative requirements such as the EU's GDPR. The DPV fills a crucial niche in the state of the art by providing a vocabulary that can be embedded and used alongside other existing standards such as W3C ODRL, and which can be customised and extended for adapting to the specifics of use-cases or domains. This article describes the version 2 iteration of the DPV in terms of its contents, methodology, current adoptions and uses, and future potential. It also describes the relevance and role of DPV in acting as a common vocabulary to support various regulatory (e.g. EU's DGA and AI Act) and community initiatives (e.g. Solid) emerging across the globe.
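To make "machine-readable representations of personal-data processing" concrete, the sketch below emits a small Turtle description of a hypothetical processing activity using DPV-style terms (stdlib only, so no RDF library is assumed). The DPV namespace IRI is real; the term names follow published DPV releases but should be verified against the version 2 specification, and `ex:Activity1` is an invented example resource:

```python
# Prefixes: dpv is the real DPV namespace; ex is a made-up example namespace.
PREFIXES = {
    "dpv": "https://w3id.org/dpv#",
    "ex": "https://example.org/",
}

# (subject, predicate, object) triples in prefixed-name form, describing a
# hypothetical activity: collecting personal data for service provision,
# with consent as the legal basis.
TRIPLES = [
    ("ex:Activity1", "a", "dpv:PersonalDataHandling"),
    ("ex:Activity1", "dpv:hasProcessing", "dpv:Collect"),
    ("ex:Activity1", "dpv:hasPurpose", "dpv:ServiceProvision"),
    ("ex:Activity1", "dpv:hasLegalBasis", "dpv:Consent"),
]


def to_turtle(prefixes, triples):
    """Serialise prefix declarations and triples as a Turtle document."""
    lines = [f"@prefix {p}: <{iri}> ." for p, iri in prefixes.items()]
    lines.append("")
    lines += [f"{s} {p} {o} ." for s, p, o in triples]
    return "\n".join(lines)


print(to_turtle(PREFIXES, TRIPLES))
```

Because the description is plain RDF, it can be loaded into any triple store or combined with other vocabularies such as ODRL, which is the interoperability property the abstract highlights.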