
    A clinical decision support system for detecting and mitigating potentially inappropriate medications

    Background: Medication errors are a leading cause of preventable harm to patients. In older adults, particularly those over 65, the impact of ageing on the therapeutic effectiveness and safety of drugs is a significant concern. Consequently, certain medications, called Potentially Inappropriate Medications (PIMs), can be dangerous in the elderly and should be avoided. Tackling PIMs is time-consuming and error-prone for health professionals and patients, as the criteria underlying the definition of PIMs are complex and subject to frequent updates. Moreover, the criteria are not available in a representation that health systems can interpret and reason with directly. Objectives: This thesis aims to demonstrate the feasibility of using an ontology/rule-based approach in a clinical knowledge base to identify PIMs, and to show how constraint solvers can be used effectively to suggest alternative medications and administration schedules that solve or minimise the undesirable side effects of PIMs. Methodology: To address these objectives, we propose a novel integrated approach that uses formal rules to represent the PIM criteria and inference engines to perform the reasoning, presented in the context of a Clinical Decision Support System (CDSS). The approach aims to detect, solve, or minimise undesirable side effects of PIMs through an ontology (knowledge base) and inference engines incorporating multiple reasoning approaches. Contributions: The main contribution lies in the framework to formalise PIMs, including the steps required to define guideline requisites, create inference rules to detect PIMs, and propose alternative drugs. No formalisation of the selected guideline (the Beers Criteria) exists in the literature; hence, this thesis provides a novel ontology for it. Moreover, our process of minimising undesirable side effects offers a novel approach that enhances and optimises the drug rescheduling process, providing a more accurate way to minimise the effect of drug interactions in clinical practice.
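
    The thesis formalises such rules in an ontology and applies inference engines and constraint solvers; as a rough, hypothetical illustration of the detection idea only (the drug names, age threshold, and alternatives below are invented stand-ins, not actual Beers Criteria content), a rule-based check might look like this:

```python
# Minimal sketch of rule-based PIM detection. Hypothetical rule content;
# the thesis uses an OWL ontology plus inference engines, not Python dicts.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    medications: list

# Simplified stand-in for one Beers-style rule: first-generation
# antihistamines flagged as potentially inappropriate at age 65+.
PIM_RULES = [
    {
        "drugs": {"diphenhydramine", "hydroxyzine"},
        "applies": lambda p: p.age >= 65,
        "rationale": "anticholinergic burden in older adults",
        "alternatives": {"loratadine", "cetirizine"},
    },
]

def detect_pims(patient):
    """Return (drug, rationale, alternatives) for every rule that fires."""
    findings = []
    for rule in PIM_RULES:
        if rule["applies"](patient):
            for drug in set(patient.medications) & rule["drugs"]:
                findings.append((drug, rule["rationale"], rule["alternatives"]))
    return findings

print(detect_pims(Patient(age=72, medications=["diphenhydramine", "metformin"])))
```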

    Don't Treat the Symptom, Find the Cause! Efficient Artificial-Intelligence Methods for (Interactive) Debugging

    In the modern world, we permanently use, leverage, interact with, and rely upon systems of ever higher sophistication, ranging from our cars, recommender systems in e-commerce, and networks when we go online, to integrated circuits in our PCs and smartphones, the power grid that ensures our energy supply, security-critical software for accessing our bank accounts, and spreadsheets for financial planning and decision making. The complexity of these systems, coupled with our high dependency on them, implies both a non-negligible likelihood of system failures and a high potential for such failures to have significant negative effects on our everyday life. For that reason, it is a vital requirement to keep the harm of emerging failures to a minimum, which means minimizing the system downtime as well as the cost of system repair. This is where model-based diagnosis comes into play. Model-based diagnosis is a principled, domain-independent approach that can be applied to troubleshoot systems of a wide variety of types, including all those mentioned above and many more. It exploits and orchestrates, among others, techniques for knowledge representation, automated reasoning, heuristic problem solving, intelligent search, optimization, stochastics, statistics, decision making under uncertainty, and machine learning, as well as calculus, combinatorics, and set theory, to detect, localize, and fix faults in abnormally behaving systems. In this thesis, we give an introduction to the topic of model-based diagnosis, point out the major challenges in the field, and discuss a selection of approaches from our research addressing these issues.
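
    The abstract surveys the field at a high level; to make the central mechanism concrete, the sketch below shows the hitting-set step of classic consistency-based (Reiter-style) diagnosis, on which much of model-based diagnosis builds. The conflict sets here are hypothetical inputs; in practice they are derived from a system model plus observations.

```python
# Diagnoses as subset-minimal hitting sets of conflict sets
# (the core combinatorial step of Reiter-style model-based diagnosis).
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    """Enumerate subset-minimal component sets that intersect every conflict."""
    components = sorted(set(chain.from_iterable(conflicts)))
    hits = []
    for size in range(1, len(components) + 1):
        for candidate in combinations(components, size):
            cand = set(candidate)
            if all(cand & c for c in conflicts):        # hits every conflict
                if not any(h <= cand for h in hits):    # keep only minimal sets
                    hits.append(cand)
    return hits

# Hypothetical conflicts: at least one of {A1, M1} and one of {A1, M2} is faulty.
print(minimal_hitting_sets([{"A1", "M1"}, {"A1", "M2"}]))
# diagnoses: {A1} alone, or {M1, M2} together
```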

    Natural Language Reasoning on ALC knowledge bases using Large Language Models

    Pretrained language models have dominated natural language processing, challenging the use of knowledge representation languages to describe the world. While these languages are not expressive enough to fully cover natural language, language models have already shown great results in terms of understanding and information retrieval directly on natural language data. We explore language models' performance at the downstream task of natural language reasoning in the description logic ALC. We generate a dataset of random ALC knowledge bases, translated into natural language, in order to assess the language models' ability to function as question-answering systems over natural language knowledge bases.
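
    The generation templates themselves are not reproduced here; as an assumed, simplified illustration of the ALC-to-English translation step the dataset construction requires, a toy verbaliser might look like this:

```python
# Toy verbaliser from ALC concepts/axioms to English. The template wording
# is an assumption for illustration, not the thesis's actual templates.

def concept_to_text(c):
    """Render a concept: an atomic name, conjunction, negation, or existential."""
    kind = c[0]
    if kind == "name":                      # ("name", "Bird")
        return f"a {c[1]}"
    if kind == "and":                       # ("and", C, D)
        return f"both {concept_to_text(c[1])} and {concept_to_text(c[2])}"
    if kind == "not":                       # ("not", C)
        return f"not {concept_to_text(c[1])}"
    if kind == "exists":                    # ("exists", role, C)
        return f"something that {c[1]}s {concept_to_text(c[2])}"
    raise ValueError(f"unknown constructor: {kind}")

def axiom_to_text(axiom):
    """Render a subsumption C ⊑ D as an 'everything that is ...' sentence."""
    lhs, rhs = axiom
    return f"Everything that is {concept_to_text(lhs)} is {concept_to_text(rhs)}."

# Bird ⊑ ∃eat.Worm
print(axiom_to_text((("name", "Bird"), ("exists", "eat", ("name", "Worm")))))
# -> Everything that is a Bird is something that eats a Worm.
```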

    Validation and Verification of Safety-Critical Systems in Avionics

    This research addresses the verification and validation of safety-critical systems. Safety-critical systems such as avionics systems are complex embedded systems, composed of several hardware and software components whose integration requires verification and testing in compliance with the Radio Technical Commission for Aeronautics standard and its supplements (RTCA DO-178C). Avionics software requires certification before its deployment into an aircraft system, and testing is mandatory for certification. Until now, the avionics industry has relied on expensive manual testing; the industry is searching for better (quicker and less costly) solutions. This research investigates formal verification and automatic test case generation approaches to enhance the quality of avionics software systems, ensure their conformity to the standard, and provide artifacts that support their certification. The contributions of this thesis are model-based automatic test case generation approaches that satisfy the modified condition/decision coverage (MC/DC) criterion, and bidirectional requirement traceability between low-level requirements (LLRs) and test cases. In the first contribution, we integrate model-based verification of properties and automatic test case generation in a single framework. The system is modeled as an extended finite state machine (EFSM) that supports both the verification of properties and automatic test case generation, capturing the control and data-flow aspects of the system. For verification, we model the system and some of its properties and ensure that the properties are correctly propagated to the implementation via mandatory testing. For testing, we extend an existing test case generation approach with the MC/DC criterion to satisfy RTCA DO-178C requirements. Both local test cases for each component and global test cases for their integration are generated. The second contribution is a model checking-based approach for automatic test case generation. In the third contribution, we develop an EFSM-based approach that uses constraint solving to handle test case feasibility and addresses bidirectional requirements traceability between LLRs and test cases. Traceability elements are determined at a low level of granularity, and then identified, linked to their source artifact, created, stored, and retrieved for several purposes. Requirements traceability has been studied extensively, but not at the proposed low level of granularity.
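
    As a brief, self-contained illustration of the MC/DC criterion that drives the test generation (an invented example, not one from the thesis): for the decision (A and B) or C, four test vectors suffice, each condition being shown to independently flip the outcome while the others are held fixed.

```python
# MC/DC on the decision (A and B) or C: each condition must be shown to
# independently affect the outcome, with all other conditions held fixed.

def decision(a, b, c):
    return (a and b) or c

tests = [
    (True,  True,  False),   # 1 -> True
    (False, True,  False),   # 2 -> False; with 1, shows A's independent effect
    (True,  False, False),   # 3 -> False; with 1, shows B's independent effect
    (True,  False, True),    # 4 -> True;  with 3, shows C's independent effect
]

def independence_pairs(tests):
    """Yield (condition, i, j) where tests i and j differ in exactly one
    condition and the decision outcome flips, i.e. an MC/DC pair."""
    for i, t1 in enumerate(tests, start=1):
        for j, t2 in enumerate(tests[i:], start=i + 1):
            diff = [k for k in range(3) if t1[k] != t2[k]]
            if len(diff) == 1 and decision(*t1) != decision(*t2):
                yield "ABC"[diff[0]], i, j

for cond, i, j in independence_pairs(tests):
    print(f"condition {cond}: shown independent by tests {i} and {j}")
```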

    Expectations and expertise in artificial intelligence: specialist views and historical perspectives on conceptualisation, promise, and funding

    Artificial intelligence's (AI) distinctiveness as a technoscientific field that imitates the ability to think went through a resurgence of interest post-2010, attracting a flood of scientific and popular expectations as to its utopian or dystopian transformative consequences. This thesis offers observations about the formation and dynamics of expectations based on documentary material from the previous periods of perceived AI hype (1960-1975 and 1980-1990, including in-between periods of perceived dormancy), and on 25 interviews with UK-based AI specialists, directly involved with its development, who commented on the issues during the crucial period of uncertainty (2017-2019) and intense negotiation through which AI gained momentum prior to its regulation and the relatively stabilised new rounds of long-term investment (2020-2021). This examination applies and contributes to longitudinal studies within the sociology of expectations (SoE) and studies of expertise and experience (SEE) frameworks, proposing a historical sociology of expertise and expectations framework. The research questions, focusing on the interplay between hype mobilisation and governance, are: (1) What is the relationship between AI's practical development and the broader expectational environment, in terms of funding and the conceptualisation of AI? (2) To what extent does informal and non-developer assessment of expectations influence formal articulations of foresight? (3) What can historical examinations of AI's conceptual and promissory settings tell us about the current rebranding of AI? The following contributions are made: (1) I extend SEE by paying greater attention to the interplay between technoscientific experts and wider collective arenas of discourse amongst non-specialists, showing how AI's contemporary research cultures are overwhelmingly influenced by the hype environment but also contribute to it. This further highlights the interaction between competing rationales: exploratory, curiosity-driven scientific research against exploitation-oriented strategies, at formal and informal levels. (2) I suggest the benefits of examining promissory environments in AI and related technoscientific fields longitudinally, treating contemporary expectations as historical products of sociotechnical trajectories, through an authoritative historical reading of AI's shifting conceptualisation and attached expectations as responses to the availability of funding and broader national imaginaries. This has the benefit of better perceiving technological hype as migrating from social group to social group instead of fading through reductionist cycles of disillusionment, whether by the rebranding of technical operations or by the investigation of a given field by non-technical practitioners. It also sensitises us to critically examine broader social expectations as factors in shifts of perception about theoretical/basic science research transforming into applied technological fields. Finally, (3) I offer a model for understanding the significance of the interplay between conceptualisations, promising, and motivations across groups, within competing dynamics of collective and individual expectations and diverse sources of expertise.

    Logics of Responsibility

    The study of responsibility is a complicated matter. The term is used in different ways in different fields, and it is easy to engage in everyday discussions as to why someone should be considered responsible for something. Typically, the backdrop of these discussions involves social, legal, moral, or philosophical problems. A clear pattern in all these spheres is the intent of issuing standards for when, and to what extent, an agent should be held responsible for a state of affairs. This is where logic lends a hand. The development of expressive logics to reason about agents' decisions in situations with moral consequences involves devising unequivocal representations of components of behavior that are highly relevant to systematic responsibility attribution and to systematic blame-or-praise assignment. To put it plainly, expressive syntactic-and-semantic frameworks help us analyze responsibility-related problems in a methodical way. This thesis builds a formal theory of responsibility. The main tool used toward this aim is modal logic and, more specifically, a class of modal logics of action known as stit theory. The underlying motivation is to provide theoretical foundations for using symbolic techniques in the construction of ethical AI. Thus, this work constitutes a contribution to formal philosophy and symbolic AI. The thesis's methodology consists in the development of stit-theoretic models and languages to explore the interplay between the following components of responsibility: agency, knowledge, beliefs, intentions, and obligations. These models are integrated into a framework that is rich enough to provide logic-based characterizations for three categories of responsibility: causal, informational, and motivational responsibility. The thesis is structured as follows. Chapter 2 discusses at length stit theory, a logic that formalizes the notion of agency in the world over an indeterministic conception of time known as branching time. The idea is that agents act by constraining possible futures to definite subsets. On the road to formalizing informational responsibility, Chapter 3 extends stit theory with traditional epistemic notions (knowledge and belief), thus formalizing important aspects of agents' reasoning in the choice and performance of actions. In a context of responsibility attribution and excusability, Chapter 4 extends epistemic stit theory with measures of optimality of actions that underlie obligations; in essence, this chapter formalizes the interplay between agents' knowledge and what they ought to do. On the road to formalizing motivational responsibility, Chapter 5 adds intentions and intentional actions to epistemic stit theory and reasons about the interplay between knowledge and intentionality. Finally, Chapter 6 merges the previous chapters' formalisms into a rich logic that is able to express and model different modes of the aforementioned categories of responsibility. Technically, the most important contributions of this thesis lie in the axiomatizations of all the introduced logics. In particular, the proofs of soundness and completeness involve long, step-by-step procedures that make use of novel techniques.
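
    For readers unfamiliar with stit ("seeing to it that") theory, the truth condition of the basic Chellas stit operator, as standardly given in the literature (textbook notation, which may differ from the thesis's own):

```latex
% Chellas stit: agent a sees to it that \varphi at moment/history pair m/h
% iff \varphi holds on every history in a's current choice cell, i.e. on
% every future that a's present action still allows.
\mathcal{M}, \langle m, h \rangle \models [a\ \mathsf{cstit}]\,\varphi
\quad\Longleftrightarrow\quad
\forall h' \in \mathit{Choice}^{m}_{a}(h):\
\mathcal{M}, \langle m, h' \rangle \models \varphi
```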

    OWL Reasoners still useable in 2023

    In a systematic literature and software review, over 100 OWL reasoners/systems were analyzed to determine whether they would still be usable in 2023. This has never been done at this scale before. OWL reasoners still play an important role in knowledge organisation and management, but the last comprehensive surveys/studies are more than eight years old. The result of this work is a comprehensive list of 95 standalone OWL reasoners and systems that use an OWL reasoner. For each item, information on project pages, source code repositories, and related documentation was gathered. The raw research data is provided in a GitHub repository for anyone to use.
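
    As an illustration of putting one of the surviving reasoners to work (the paper gathers metadata rather than code; the choice of the owlready2 Python library, which bundles the HermiT reasoner, and the ontology IRI are assumptions made for this example):

```python
# Load an ontology and run a bundled OWL reasoner (HermiT) via owlready2,
# then inspect the class hierarchy after reasoning.
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("http://example.org/pizza.owl").load()  # placeholder IRI

with onto:
    sync_reasoner()  # runs HermiT; inferred axioms are added to the world

for cls in onto.classes():
    print(cls, "subclass of", list(cls.is_a))
```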

    Mechanising Euler's use of infinitesimals in the proof of the Basel problem

    In 1736 Euler published a proof of an astounding relation between π and the reciprocals of the squares: π²/6 = 1 + 1/4 + 1/9 + 1/16 + 1/25 + … Until this point, π had not been part of any mathematical relation outside of geometry. This relation would have had an almost supernatural significance to the mathematicians of the time. But even more amazing is Euler's proof. He factorises a transcendental function as if it were a polynomial of infinite degree. He discards infinitely many infinitely small numbers. He substitutes 1 for the ratio of two distinct infinite numbers. Nowadays Euler's proof is held up as an example of both genius intuition and flagrantly unrigorous method. In this thesis we describe how, with the aid of nonstandard analysis, which gives a consistent formal theory of infinitely small and infinitely large numbers, and the proof assistant Isabelle, we construct a partial formal proof of the Basel problem which follows the method of Euler's proof from his 'Introductio in Analysin Infinitorum'. We use our proof to demonstrate that Euler was systematic in his use of infinitely large and infinitely small numbers and did not make unjustified leaps of intuition. The concept of 'hidden lemmas' was developed by McKinzie and Tuckey, based on Lakatos and Laugwitz, to represent general principles that Euler's proof followed. We develop a theory of infinite 'hyperpolynomials' in Isabelle in order to formalise these hidden lemmas, and we find that formal reconstruction of his proof using hidden lemmas is an effective way to discover the nuances in Euler's reasoning and demystify the controversial points. In conclusion, we find that Euler's reasoning was consistent and insightful, yet follows a methodology distinct from modern deductive proof.
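
    For context, the pivotal step of Euler's argument as reconstructed in standard accounts (modern notation; not quoted from the thesis): factor sin x / x over its roots x = ±nπ as if it were an infinite polynomial, then compare the coefficients of x².

```latex
% Euler's factorisation of sin x / x over its roots, set against its
% Taylor expansion:
\frac{\sin x}{x}
  = \prod_{n=1}^{\infty}\left(1-\frac{x^{2}}{n^{2}\pi^{2}}\right)
  = 1 - \frac{x^{2}}{6} + \frac{x^{4}}{120} - \cdots
% Equating the coefficients of x^2 on both sides:
-\sum_{n=1}^{\infty}\frac{1}{n^{2}\pi^{2}} = -\frac{1}{6}
\qquad\Longrightarrow\qquad
\sum_{n=1}^{\infty}\frac{1}{n^{2}} = \frac{\pi^{2}}{6}
```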

    Meta-ontology fault detection

    Ontology engineering is the field within knowledge representation concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use, and maintain these ontologies has produced a relatively large body of formal, theoretical, and methodological research. One subfield of ontology engineering, ontology debugging, is concerned with preventing, detecting, and repairing errors (or, more generally, pitfalls, bad practices, or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard to prevent and detect, and they have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce results more powerful than the simple sum of the parts. Ontology alignment further compounds the difficulties of ontology debugging by introducing, propagating, and exacerbating faults in ontologies. A notable aspect of ontology debugging is that, owing to these difficulties, research within it is usually constrained in scope, focusing on particular aspects of the problem, on applications to certain subdomains, or on specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions, or benchmarks that form a foundation for the field. In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature, extracting common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism I developed to represent these patterns is called existential second-order query logic (ESQ logic). I further reformulated a large proportion of the ideas present in existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD was spent designing and implementing an algorithm to automatically detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic: an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions), and its fairness, or completeness, in the enumeration of infinite spaces of solutions. Moreover, I have produced a Haskell implementation of minimal commitment resolution for ESQ logic that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply the algorithm to examples of more realistic size have proven unsuccessful, with computation times exceeding our tolerance levels. In this thesis I detail the challenges faced in this regard, present other successful forms of qualitative evaluation of the meta-ontology fault detection approach, and discuss what I believe are the main causes of the computational feasibility problems, ideas for overcoming them, and directions for future work that could use the results of the thesis to contribute foundational formalisms, ideas, and approaches to ontology debugging capable of properly combining existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
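
    To give a flavour of what a semantic fault pattern can express (written here in ordinary description-logic notation as an assumed illustration; the thesis's own ESQ syntax is not reproduced), a classic detectable fault is a named class that the ontology entails to be unsatisfiable:

```latex
% Schematic fault pattern (illustrative notation, not the thesis's ESQ
% syntax): there exists a named class C in the ontology's signature that
% the ontology O entails to be empty in every model, which usually signals
% an authoring error rather than an intended modelling choice.
\exists C \in \mathsf{Sig}(\mathcal{O}) \;:\; \mathcal{O} \models C \sqsubseteq \bot
```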