
    A clinical decision support system for detecting and mitigating potentially inappropriate medications

    Background: Medication errors are a leading cause of preventable harm to patients. In older adults, particularly those over 65, the impact of ageing on the therapeutic effectiveness and safety of drugs is a significant concern. Consequently, certain medications, known as Potentially Inappropriate Medications (PIMs), can be dangerous in the elderly and should be avoided. Identifying and managing PIMs is time-consuming and error-prone for health professionals and patients, as the criteria underlying the definition of PIMs are complex and subject to frequent updates. Moreover, the criteria are not available in a representation that health systems can interpret and reason with directly. Objectives: This thesis aims to demonstrate the feasibility of using an ontology/rule-based approach in a clinical knowledge base to identify potentially inappropriate medications (PIMs), and to show how constraint solvers can be used to suggest alternative medications and administration schedules that resolve or minimise the undesirable side effects of PIMs. Methodology: To address these objectives, we propose a novel integrated approach, presented in the context of a Clinical Decision Support System (CDSS), that uses formal rules to represent the PIM criteria and inference engines to perform the reasoning. The approach aims to detect, resolve, or minimise undesirable side effects of PIMs through an ontology (knowledge base) and inference engines incorporating multiple reasoning approaches. Contributions: The main contribution is a framework for formalising PIMs, including the steps required to turn guideline requisites into inference rules that detect inappropriate medications and propose alternative drugs. No formalisation of the selected guideline (the Beers Criteria) can be found in the literature, and hence this thesis provides a novel ontology for it. Moreover, our process for minimising undesirable side effects offers a novel approach that enhances and optimises drug rescheduling, providing a more accurate way to minimise the effect of drug interactions in clinical practice.
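    As a purely illustrative sketch (not the thesis implementation), the following Python snippet shows the flavour of a rule-based PIM check: a toy, hypothetical criteria table is consulted against a patient's age and medication list, and a flagged drug comes back with a suggested alternative. The real system encodes the Beers Criteria in an OWL ontology and reasons over it with inference engines and constraint solvers.

```python
# Minimal illustrative sketch (not the thesis implementation): a rule-style
# check of a medication list against a toy, hypothetical PIM criteria table.
# The thesis instead encodes the Beers Criteria in an OWL ontology and uses
# inference engines plus constraint solvers over it.

# Hypothetical criteria: drug -> (minimum age at which it is a PIM, suggestion)
PIM_CRITERIA = {
    "diazepam": (65, "shorter-acting alternative reviewed by a clinician"),
    "ketorolac": (65, "non-NSAID analgesic"),
}

def flag_pims(age, medications):
    """Return (drug, suggestion) pairs for medications that match a PIM rule."""
    flags = []
    for drug in medications:
        rule = PIM_CRITERIA.get(drug.lower())
        if rule and age >= rule[0]:
            flags.append((drug, rule[1]))
    return flags

if __name__ == "__main__":
    for drug, suggestion in flag_pims(72, ["Diazepam", "Metformin"]):
        print(f"Potentially inappropriate: {drug}; consider: {suggestion}")
```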

    Selectional Restriction Extraction for Frame-Based Knowledge Graph Augmentation

    The Semantic Web is an ambitious project aimed at creating a global, machine-readable web of data that intelligent agents can access and reason over. Ontologies are a key component of the Semantic Web, as they provide a formal description of the concepts and relationships in a particular domain. Exploiting the expressiveness of knowledge graphs together with a more logically sound ontological schema can be crucial for representing consistent knowledge and inferring new relations over the data. In other words, constraining the entities and predicates of knowledge graphs leads to improved semantics. The same benefits apply to restrictions over linguistic resources, which are knowledge graphs used to represent natural language. More specifically, it is possible to specify constraints on the arguments that can be associated with a given frame, based on their semantic roles (selectional restrictions). However, most linguistic resources define very general restrictions because they must be able to represent different domains. Hence, the main research question tackled by this thesis is whether the use of domain-specific selectional restrictions is useful for ontology augmentation, ontology definition and neuro-symbolic tasks on knowledge graphs. To this end, we have developed a tool to empirically extract selectional restrictions and their probabilities. The obtained constraints are represented in OWL-Star and subsequently mapped into OWL: we show that the mapping is information-preserving and invertible if certain conditions hold. The resulting OWL ontologies are inserted into Framester, an open lexical-semantic resource for the English language, resulting in an improved and augmented language resource hub. The use of selectional restrictions is also tested for ontology documentation and neuro-symbolic tasks, showing how they can be exploited to provide meaningful results.
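    As a minimal, hypothetical sketch of the extraction idea (the frame, role, and filler-type names below are invented, and the thesis represents the resulting constraints in OWL-Star/OWL rather than printing them), one could estimate selectional-restriction probabilities from annotated (frame, role, filler-type) observations like this:

```python
# Illustrative sketch only: estimate selectional-restriction probabilities
# from (frame, semantic role, filler type) observations. The thesis represents
# the resulting constraints in OWL-Star and maps them into OWL; here we simply
# print them. All frame/role/type names below are made up for the example.
from collections import Counter, defaultdict

observations = [
    ("Ingestion", "Ingestor", "Person"),
    ("Ingestion", "Ingestor", "Person"),
    ("Ingestion", "Ingestor", "Animal"),
    ("Ingestion", "Ingestibles", "Food"),
]

counts = defaultdict(Counter)
for frame, role, filler_type in observations:
    counts[(frame, role)][filler_type] += 1

for (frame, role), fillers in counts.items():
    total = sum(fillers.values())
    for filler_type, n in fillers.items():
        print(f"{frame}.{role} -> {filler_type}: p = {n / total:.2f}")
```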

    An ontological framework for assisting people with disabilities in organisations

    The aim of this thesis was to construct an ontological framework, the OntoCarer Ontological Framework, for use in a software application that creates an assistive social network for organisations and agents. The application identifies the need for assistance among people with disabilities who intend but are unable to perform actions, searches for suitable assisters, and then directs and provides the relevant assistance to satisfy those intentions. The framework was constructed using Semantic Web languages, including OWL and RDF, and the agents involved were construed as BDI agents. The motivation for constructing the framework and application was a gap in existing BDI theory, in which agents are expected to implement their own intentions. For persons with disabilities this may not be possible: disability can mean an impairment of agency that requires assistance if an intention is to be fulfilled. Fulfilling the aims of the project therefore meant extending BDI theory to include people with disabilities, creating Semantic Web representations of BDI concepts, and creating a BDI software agent, the Organisation Agent, that can act as an intermediary between the assisted and assisters in an organisation, communicating via mobile phones. The Organisation Agent identifies when an action becomes an intention and whether the action is impaired and so needs assistance; if it does, the agent finds the assisting actions from those available, selects the optimal assister and its agent, and directs it to the assisted agent. This is the OntoCarer Assistance Lifecycle. It depends upon the OntoCarer Ontological Framework, which comprises seven top-level OWL ontologies: Agent, Social Action, Body Components, Body Abilities, Organisation, Buildings, and the OntoCarer-Link-Ontology. The framework is also compatible with the WHO ICF classifications. The OntoCarer-Link-Ontology is an ontology of properties used to link assisted actions with assister actions, thereby creating an assistive social network for an organisation. A methodology, the OntoCarer Ontological Framework Methodology, was defined to construct the top-level ontologies, the domain ontology extensions for the Education domain, and an RDF model of a simulated college. A Java software application was created that implemented the Organisation Agent with a BDI architecture. Scenarios were run on the application to test the execution of the OntoCarer Assistance Lifecycle on the college RDF model and so validate the framework.
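    The following is a minimal sketch, under assumed data structures, of the assister-matching step in such a lifecycle: an impaired intended action lists the abilities it requires, and the nearest candidate whose advertised abilities cover them is selected. The names and the distance-based scoring are placeholders, not the OntoCarer vocabulary or algorithm, which operates over OWL/RDF models of the organisation.

```python
# Illustrative sketch (not the OntoCarer implementation): select an assister
# for an impaired intended action by matching required abilities against the
# abilities advertised by candidate assisters. Names and the scoring rule are
# placeholders invented for this example.

intended_action = {"name": "open_door", "requires": {"grip", "push"}}

assisters = [
    {"name": "agent_A", "abilities": {"grip", "push", "lift"}, "distance_m": 40},
    {"name": "agent_B", "abilities": {"push"}, "distance_m": 5},
]

def select_assister(action, candidates):
    """Return the nearest candidate whose abilities cover the action, else None."""
    capable = [c for c in candidates if action["requires"] <= c["abilities"]]
    return min(capable, key=lambda c: c["distance_m"]) if capable else None

print(select_assister(intended_action, assisters))  # -> agent_A
```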

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    Generic semantics-based task-oriented dialogue system framework for human-machine interaction in industrial scenarios

    In Industry 5.0, workers and their well-being are crucial to the production process. In this context, task-oriented dialogue systems allow operators to delegate the simpler tasks to industrial systems while they work on more complex ones. Moreover, being able to interact naturally with these systems reduces the cognitive load of using them and fosters user acceptance. However, most existing solutions do not allow natural communication, and current techniques for building such systems need large amounts of training data, which are scarce in this kind of scenario. As a result, task-oriented dialogue systems in the industrial domain are highly specific, which limits their ability to be modified or reused in other scenarios, tasks that entail a great deal of effort in terms of time and cost. Given these challenges, this thesis combines Semantic Web Technologies with Natural Language Processing techniques to develop KIDE4I, a semantic task-oriented dialogue system for industrial environments that enables natural communication between humans and industrial systems. KIDE4I's modules are designed to be generic so that they can be easily adapted to new use cases. The modular ontology TODO is the core of KIDE4I; it models the domain and the dialogue process and stores the generated traces. KIDE4I has been implemented and adapted for use in four industrial use cases, demonstrating that the adaptation process is not complex and benefits from the use of resources.
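    As a loose illustration only (the task registry, slot patterns, and utterance below are invented, and KIDE4I grounds tasks and dialogue traces in the TODO ontology rather than in regular expressions), a task-oriented interpreter of the kind described might map an operator utterance to a task and its slots as follows:

```python
# Illustrative sketch only: map an operator utterance to a known task and fill
# its slots, the kind of step a task-oriented dialogue system performs. The
# task registry and slot patterns are invented for this example.
import re

TASKS = {
    "start_machine": {"pattern": re.compile(r"start (?:the )?(?P<machine>\w+)")},
    "report_status": {"pattern": re.compile(r"status of (?:the )?(?P<machine>\w+)")},
}

def interpret(utterance):
    """Return the first matching task and its extracted slot values."""
    for task, spec in TASKS.items():
        match = spec["pattern"].search(utterance.lower())
        if match:
            return {"task": task, "slots": match.groupdict()}
    return {"task": None, "slots": {}}

print(interpret("Please start the press"))  # {'task': 'start_machine', 'slots': {'machine': 'press'}}
```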

    Towards an evolvable description and efficient exploration of microservices architecture concepts and artefacts

    The adoption of the Microservices Architecture (MSA) for the design of software systems is a trend in both industry and research. Compositional and distributed by nature, systems based on the Microservices Architecture are composed of services with a narrow, well-defined responsibility, aiming for complete isolation under a share-nothing approach to resources. Microservices-based systems are often classified as "Cloud-Native" systems. Adopting the Microservices Architecture represents a technological and managerial paradigm shift that brings challenges, notably the size, scope and number of services, and their interoperability and reuse. Beyond these challenges, understanding, adopting and implementing the fundamental principles of this architectural style are difficulties that affect the design of effective and coherent microservices architectures. Indeed, the absence of a broad consensus on certain key principles and terms of this architecture leads to misunderstanding and, consequently, to incorrect implementations. This lack of consensus is a concrete manifestation of the immaturity of the architecture and creates challenges when formalising knowledge about it. There is also no uniform method capable of supporting designers when modelling microservices, in particular when arranging the various components, and no conceptual models exist to guide engineers in the early design phases of these systems. Several approaches have been used to model microservices architectures, such as formal and informal, manual and automatic, and all combinations of these four, but they do not address all the challenges faced by designers. To make microservices modelling easier and more efficient, alternative design and representation approaches are needed. With this in mind, we propose an ontological approach capable of addressing both the design and the representation challenges of microservices architectures. In this thesis, we present our research results, whose main contribution is a domain ontology of the Microservices Architecture defined following description-logic principles and formalised using the Web Ontology Language (OWL), a key Semantic Web technology. We named this ontology the Ontology of Microservices Architecture Concepts (OMSAC). OMSAC contains enough vocabulary to describe the concepts that define the Microservices Architecture and to represent the various artifacts composing such architectures. Its structure allows rapid evolution and can accommodate the issues arising from the current immaturity of these architectures. As an artificial intelligence (AI) technology, ontologies offer advanced reasoning capabilities that can be extended with other technologies to meet different needs. With this goal, we used OMSAC together with machine learning techniques to model and analyse microservices architectures in order to compute the degree of similarity between microservices belonging to different systems. This use case of OMSAC constitutes an additional contribution of our research and strengthens research perspectives on assisting, tooling and automating the modelling of microservices architectures. It also shows the relevance of investigating mechanisms for advanced analytics over architecture models. In future work, we will focus on developing these mechanisms and plan to design an intelligent assistant capable of drafting microservices architectures based on best practices and promoting the reuse of existing microservices. We also intend to develop a dedicated language that abstracts the syntax of OWL and of the SPARQL query language, to make OMSAC easier to use for designers, engineers and programmers who are not familiar with these Semantic Web technologies. -- Keywords: Microservices architectures, ontologies, software system modelling, machine learning. -- ABSTRACT: The use of Microservices Architecture (MSA) for designing software systems has become a trend in industry and research. Adopting MSA represents a technological and managerial shift with challenges including the size, scope, number, interoperability and reuse of microservices, modelling using multi-viewpoints, as well as the adequate understanding, adoption, and implementation of fundamental principles of the Microservices Architecture. Adequately undertaking these challenges is mandatory for designing effective MSA-based systems. In this thesis, we explored an ontological representation of the knowledge concerning the Microservices Architecture domain. This representation is capable of addressing MSA understanding and modelling challenges. As a result of this research, we propose the Ontology of Microservices Architecture Concepts (OMSAC), which is a domain ontology containing enough vocabulary to describe MSA concepts and artifacts in a form that allows fast evolution and advanced analytical capabilities. -- Keywords: Microservices Architecture, Ontologies, Conceptual modelling, machine learning
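    A minimal sketch of the similarity idea follows, under the assumption that feature sets (exposed operations, consumed resources, emitted events) can be extracted from an OMSAC-style model: the Jaccard overlap of two such sets gives a simple similarity score. The feature sets below are invented, and the thesis combines the ontology with machine learning techniques rather than a single hand-written measure.

```python
# Illustrative sketch only: compare two microservice descriptions by the
# Jaccard similarity of feature sets that could be extracted from an
# OMSAC-style model (e.g. exposed operations, consumed resources, events).
# The feature sets below are invented for this example.

def jaccard(a, b):
    """Jaccard similarity of two sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 1.0

orders_service = {"http:GET /orders", "http:POST /orders", "db:orders", "event:order_created"}
billing_service = {"http:POST /invoices", "db:invoices", "event:order_created"}

print(f"similarity = {jaccard(orders_service, billing_service):.2f}")
```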

    Development of a Framework for Ontology Population Using Web Scraping in Mechatronics

    One of the major challenges in engineering contexts is the efficient collection, management, and sharing of data. To address this problem, semantic technologies and ontologies are potent assets, although some tasks, such as ontology population, usually demand high maintenance effort. This thesis proposes a framework to automate data collection from sparse web resources and insert it into an ontology. First, a product ontology is created by combining several reference vocabularies, namely GoodRelations, the Basic Formal Ontology, the ECLASS standard, and an information model. Then, this study introduces a general procedure for developing a web scraping agent to collect data from the web. Subsequently, an algorithm based on lexical similarity measures is presented to map the collected data to the concepts of the ontology. Lastly, the collected data is inserted into the ontology. To validate the proposed solution, this thesis implements the previous steps to collect information about microcontrollers from three different websites. Finally, the thesis evaluates the use case results, draws conclusions, and suggests promising directions for future research.
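    A minimal sketch of the mapping step follows, assuming a stubbed scraper output and placeholder property IRIs (not the product ontology built in the thesis): scraped attribute names are matched to ontology properties by lexical similarity and the values are inserted as RDF triples with rdflib.

```python
# Illustrative sketch only: map scraped attribute names to ontology properties
# by lexical similarity, then emit RDF triples. The scraped record is a stub
# (no live scraping here) and the property IRIs are placeholders.
# Requires: pip install rdflib
from difflib import SequenceMatcher
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/product#")
ONTOLOGY_PROPERTIES = {"operatingVoltage": EX.operatingVoltage, "flashMemory": EX.flashMemory}

scraped = {"Operating voltage": "5 V", "Flash memory": "32 KB"}  # stubbed scraper output

def best_match(field, candidates, threshold=0.6):
    """Return the candidate name most lexically similar to the field, if above threshold."""
    scored = [(SequenceMatcher(None, field.lower(), c.lower()).ratio(), c) for c in candidates]
    score, name = max(scored)
    return name if score >= threshold else None

g = Graph()
product = URIRef("http://example.org/product/mcu-123")
for field, value in scraped.items():
    match = best_match(field, ONTOLOGY_PROPERTIES)
    if match:
        g.add((product, ONTOLOGY_PROPERTIES[match], Literal(value)))

print(g.serialize(format="turtle"))
```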

    Development of a semantic knowledge modelling approach for evaluating offsite manufacturing production processes

    The housing sector in the UK and across the globe is constantly under pressure to deliver enough affordable houses to meet increasing demand. Offsite Manufacturing (OSM), a modern method of construction, is considered a key means of meeting this demand given its potential to increase efficiency and boost productivity. Although the use of OSM to increase the supply of affordable and efficient homes is growing in popularity, the focus has been on 'what' method of construction is used (i.e. whether an OSM or a traditional approach is implemented) rather than 'how' the alternative construction approach should be carried out (i.e. the choice of OSM method to meet set objectives). The approaches used by professionals implementing OSM have been criticised as unstructured and as differing little from conventional onsite methods, yielding few process gains. Previous studies have compared the performance of OSM and other modern methods of construction with conventional methods of construction. However, there is hardly any attempt, nor quantitative evidence, comparing the performance of competing OSM approaches (i.e. methods with standardised and non-standardised processes) to support stakeholders in making an informed choice between methods. To address this research gap, this research aims to develop a proof-of-concept knowledge-based process analysis tool that enables OSM practitioners to efficiently evaluate the performance of their chosen OSM methods, supporting informed decision-making and continuous improvement. To achieve this aim, an ontology knowledge modelling approach was adopted to leverage data and information sources with semantics, and an offsite production workflow (OPW) ontology was developed to enable a detailed analysis of OSM production methods. The research first undertook an extensive critical review of the OSM domain to identify existing OSM knowledge and how this knowledge can be formalised to aid communication in the domain. In addition, a separate review of process analysis methods and knowledge-based modelling methods was conducted concurrently to identify suitable approaches for analysing and systematising OSM knowledge respectively. The lean manufacturing value system analysis (VSA) approach was applied to two units of analysis: a typical non-standardised OSM method (a static method of production) and a standardised OSM method (a semi-automated method of production). Knowledge systematisation was carried out using an ontology knowledge modelling approach to develop the process analysis tool, the OPW ontology. The OPW ontology was further evaluated by mapping a case of lightweight steel frame modular house production to model a real-life context. A two-stage validation approach was then implemented to test the ontology, consisting first of an internal validation of the logic and consistency of the results, and then of an expert validation process using an industry-approved set of criteria. The results of the study revealed that the non-standardised, ad-hoc OSM production method, which involves a significant amount of manual work, offers little process improvement over the conventional onsite method when measured by process time and cost. In comparison with the structured (semi-automated) OSM production method, process cost and time were found to be 82% and 77% higher respectively in the static method, based on a like-for-like production schedule. The study also evaluates the root causes of process waste, accounting for the non-value-added time and cost consumed. The results support informed decision-making on the choice of OSM production methods for continuous improvement. The main contributions to knowledge and practice are as follows: i. The output of this research contributes to the body of literature on offsite concepts, definitions and classification, through the generic classification framework developed for the OSM domain; this provides a means of supporting clear communication and knowledge sharing in the domain and supports knowledge systematisation. ii. The approach used in this research, integrating the value system analysis (VSA) and activity-based costing (ABC) methods for process analysis, is novel; the ABC method bridges a gap by generating detailed process-related data to support cost/time-based analysis of OSM processes. iii. The developed generic process map, which represents the OSM production process and captures activity sequences, resources and information flow within the process, will help disseminate knowledge on OSM and improve best practice in the industry. iv. The developed process analysis tool (the OPW ontology) has been tested on a real-life OSM project and validated by domain experts as a competent tool. The knowledge structure and rules integrated into the OPW ontology have been published on the web for knowledge sharing and reuse. The tool can be adapted by OSM practitioners to develop a company-specific tool that captures their specific business processes, which can then support the evaluation of those processes to enable continuous improvement.
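    As an illustration of the activity-based costing arithmetic behind such comparisons (all durations and rates below are invented and do not reproduce the thesis figures), process time and cost can be rolled up per production method and compared:

```python
# Illustrative sketch only: activity-based costing style roll-up of process
# time and cost for two production methods, the kind of comparison the thesis
# makes between static and semi-automated OSM methods. All activity durations
# and rates below are invented for this example.

def rollup(activities):
    """Sum duration (hours) and cost (duration * hourly rate) over activities."""
    time = sum(d for d, _ in activities)
    cost = sum(d * rate for d, rate in activities)
    return time, cost

static_method = [(6.0, 45.0), (4.0, 45.0), (3.0, 60.0)]        # (hours, cost per hour)
semi_automated = [(2.5, 45.0), (1.5, 80.0), (1.0, 60.0)]

for name, acts in [("static", static_method), ("semi-automated", semi_automated)]:
    t, c = rollup(acts)
    print(f"{name}: {t:.1f} h, cost {c:.0f}")
```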

    Scaling the development of large ontologies: Identitas and hypernormalization

    During the last decade, ontologies have become a fundamental part of the life sciences for building organised computational knowledge. Currently, there are more than 800 biomedical ontologies hosted by the NCBO BioPortal repository. However, the proliferation of ontologies in the biomedical and biological domains has highlighted a number of problems. As ontologies become large, their development and maintenance become more challenging and time-consuming; the scalability of ontology development has therefore become problematic. In this thesis, we examine two new approaches that can help address this challenge. First, we consider a new approach to identifiers that could significantly facilitate the scalability of ontologies and overcome some of the issues with monotonic, numeric identifiers while remaining semantics-free. Our solutions are described, along with the Identitas library, which allows concurrent development, pronounceability and error checking. The library was integrated into two ontology development environments, Protégé and Tawny-OWL. This thesis also discusses the ways in which current ontological practices could be migrated towards the use of this scheme. Second, we investigate the use of hypernormalisation, patternisation and programmatic approaches by asking how they could be used to rebuild the Gene Ontology (GO). The aim of the hypernormalisation and patternisation techniques is to allow the ontology developer to manage the ontology's maintainability and evolution. To apply this approach we had to analyse the ontology structure, starting with the Molecular Function Ontology (MFO). The MFO is formed from several large and tangled hierarchies of classes, each of which describes a broad molecular activity. Applying the hypernormalisation approach resulted in the creation of a hypernormalised form of the Transporter Activity (TA) and Catalytic Activity (CA) hierarchies, which together constitute 78% of all classes in MFO. The hypernormalised structures of TA and CA are generated from higher-level patterns and novel content-specific patterns, and exploit ontology reasoners. The generated ontologies are robust, easy to maintain, and can be developed and extended freely. Although there are a variety of ontology development tools, Tawny-OWL is a programmatic, interactive tool for ontology creation and management that provides a set of patterns explicitly supporting the creation of a hypernormalised ontology. Finally, the investigation of hypernormalisation highlighted inconsistent classifications and identified a significant semantic mismatch between GO and the Chemical Entities of Biological Interest (ChEBI) ontology. Although both ontologies describe the same real entities, GO often refers to the form most common in biology, while ChEBI is more specific and precise. The use of hypernormalisation forces us to deal with this mismatch; we used the equivalence axioms created by the GO-Plus ontology. To sum up, to address the scalability and ease of development of ontologies, we propose a new identifier scheme and investigate the use of the hypernormalisation methodology. Together, Identitas and the hypernormalisation technique should enable the construction of large-scale ontologies in the future. (Northern Borders University, Saudi Arabia)
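    As a rough illustration of the pronounceable-identifier idea (this is a proquint-style toy, not the Identitas library or its exact scheme, which also covers error checking and tool integration), a numeric identifier can be rendered as a pronounceable, semantics-free string:

```python
# Illustrative sketch only: a proquint-style encoding that turns a numeric
# identifier into a pronounceable, semantics-free string. This is not the
# Identitas library or its exact scheme.

CONSONANTS = "bdfghjklmnprstvz"   # 16 consonants encode 4 bits each
VOWELS = "aiou"                   # 4 vowels encode 2 bits each

def proquint16(value):
    """Encode a 16-bit integer as a pronounceable 5-letter syllable pair."""
    assert 0 <= value < 2**16
    c1, value = CONSONANTS[value >> 12], value & 0x0FFF
    v1, value = VOWELS[value >> 10], value & 0x03FF
    c2, value = CONSONANTS[value >> 6], value & 0x003F
    v2, value = VOWELS[value >> 4], value & 0x000F
    return c1 + v1 + c2 + v2 + CONSONANTS[value]

print(proquint16(0x1234))  # a stable, pronounceable rendering of the number
```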