103 research outputs found

    VOAR: A Visual and Integrated Ontology Alignment Environment

    Get PDF
    Ontology alignment is a key process for enabling interoperability between ontology-based systems in the Linked Open Data age. From two input ontologies, this process generates an alignment (a set of correspondences) between them. In this paper we present VOAR, a new web-based environment for ontology alignment visualization and manipulation. Within this graphical environment, users can manually create and edit correspondences and apply a set of operations to alignments (filtering, merge, difference, etc.). VOAR allows invoking external ontology matching systems that implement a specific alignment interface, so that the generated alignments can be manipulated within the environment. Multiple alignments can also be evaluated together against a reference alignment, using the classical evaluation metrics (precision, recall and F-measure). The status of each correspondence, i.e. its presence in or absence from the reference alignment, is represented visually. Overall, the main new aspect of VOAR is the visualization and manipulation of alignments at schema level, in an integrated, visual and web-based environment.
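    The classical evaluation the abstract mentions can be sketched in a few lines. This is a minimal illustration, not VOAR's actual interface: the triple representation of correspondences and the function name are assumptions.

```python
# Minimal sketch of alignment evaluation with precision, recall and
# F-measure. A correspondence is modelled here as a
# (source_entity, target_entity, relation) triple; this shape is an
# illustrative assumption, not VOAR's real data model.

def evaluate(alignment, reference):
    """Return (precision, recall, f_measure) for one alignment."""
    found = set(alignment) & set(reference)  # correct correspondences
    precision = len(found) / len(alignment) if alignment else 0.0
    recall = len(found) / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

ref = {("a:Person", "b:Human", "="), ("a:Paper", "b:Article", "=")}
ali = {("a:Person", "b:Human", "="), ("a:Paper", "b:Document", "=")}
p, r, f = evaluate(ali, ref)
# one of two correspondences is correct: p = 0.5, r = 0.5, f = 0.5
```

    Evaluating several candidate alignments is then just a loop over this function against the same reference set.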

    Aligning Controlled vocabularies for enabling semantic matching in a distributed knowledge management system

    Get PDF
    The underlying idea of the Semantic Web is that web content should be expressed not only in natural language but also in a language that can be unambiguously understood, interpreted and used by software agents, thus permitting them to find, share and integrate information more easily. Central to the Semantic Web are ontologies: shared vocabularies providing taxonomies of concepts, objects and the relationships between them, which describe particular domains of knowledge. A vocabulary stores words, synonyms, word sense definitions (i.e. glosses), and relations between word senses and concepts; such a vocabulary is generally referred to as a Controlled Vocabulary (CV) if the choice or selection of terms is done by domain specialists. A facet is a distinct, dimensional feature of a concept or a term that allows a taxonomy, ontology or CV to be viewed or ordered in multiple ways, rather than in a single way. A facet is clearly defined, mutually exclusive, and composed of collectively exhaustive properties or characteristics of a domain. For example, a collection of rice might be represented using a name facet, a place facet, etc. This thesis presents a methodology for producing mappings between Controlled Vocabularies, based on a technique called "Hidden Semantic Matching". The word "Hidden" stands for its not relying on any sort of externally provided background knowledge: the sole exploited knowledge comes from the "semantic context" of the very CVs being matched. We build a facet for each concept of these CVs, considering more general concepts (broader terms), less general concepts (narrower terms) and related concepts (related terms). Together these form a concept facet (CF), which is then used to boost the matching process.
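    The concept-facet idea can be illustrated concretely. In this sketch a CV is a plain dictionary of broader/narrower/related term lists, and the Jaccard similarity and sample data are illustrative assumptions, not the thesis's actual matching algorithm.

```python
# Hedged sketch of facet-based "hidden semantic" matching: each
# concept's facet is the set of broader, narrower and related terms
# recorded in the CV itself, so no external background knowledge is
# used. The similarity function below (Jaccard overlap) is an
# illustrative choice.

def concept_facet(cv, term):
    """Union of broader/narrower/related terms recorded for `term`."""
    entry = cv.get(term, {})
    return (set(entry.get("broader", []))
            | set(entry.get("narrower", []))
            | set(entry.get("related", [])))

def facet_similarity(cv1, t1, cv2, t2):
    """Jaccard overlap of the two concept facets, in [0, 1]."""
    f1, f2 = concept_facet(cv1, t1), concept_facet(cv2, t2)
    if not f1 or not f2:
        return 0.0
    return len(f1 & f2) / len(f1 | f2)

cv_a = {"rice": {"broader": ["cereal"], "related": ["paddy"]}}
cv_b = {"oryza": {"broader": ["cereal"], "related": ["paddy", "grain"]}}
sim = facet_similarity(cv_a, "rice", cv_b, "oryza")  # 2/3
```

    A matcher would compute this score for candidate concept pairs and keep those above a chosen threshold.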

    Lightweight information integration through partial mapping and query reformulation

    Get PDF
    [no abstract]

    Software-based dialogue systems: Survey, taxonomy and challenges

    Get PDF
    The use of natural language interfaces in the field of human-computer interaction is undergoing intense study through dedicated scientific and industrial research. The latest contributions in the field, including deep learning approaches like recurrent neural networks, the potential of context-aware strategies and user-centred design approaches, have brought the attention of the community back to software-based dialogue systems, generally known as conversational agents or chatbots. Nonetheless, and given the novelty of the field, a generic, context-independent overview of the current state of research on conversational agents covering all the research perspectives involved is missing. Motivated by this context, this paper reports a survey of the current state of research on conversational agents through a systematic literature review of secondary studies. The conducted research is designed to develop an exhaustive perspective through a clear presentation of the aggregated knowledge published in recent literature within a variety of domains, research focuses and contexts. As a result, this research proposes a holistic taxonomy of the different dimensions involved in the conversational agents field, which is expected to help researchers and to lay the groundwork for future research in the field of natural language interfaces. With the support of the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia and the European Social Fund. The corresponding author gratefully acknowledges the Universitat Politècnica de Catalunya and Banco Santander for the financial support of his predoctoral grant FPI-UPC. This paper has been funded by the Spanish Ministerio de Ciencia e Innovación under project / funding scheme PID2020-117191RB-I00 / AEI/10.13039/501100011033. Peer reviewed. Postprint (author's final draft).

    Formal description and automatic generation of learning spaces based on ontologies

    Get PDF
    Doctoral thesis in Informatics. A good Learning Space (LS) should convey pertinent information to visitors at the most adequate time and location to favor their knowledge acquisition. This statement justifies the relevance of virtual Learning Spaces. Considering the consolidation of the Internet and the improvement of interaction, searching and learning mechanisms, this work proposes a generic architecture, called CaVa, to create virtual Learning Spaces built upon cultural institution documents. More precisely, the proposal is to automatically generate ontology-based virtual learning environments from document repositories. Thus, to impart relevant learning materials to the virtual LS, this proposal is based on using ontologies to represent the fundamental concepts and semantic relations in a user- and machine-understandable format. These concepts, together with the data (extracted from the real documents) stored in a digital repository, are displayed in a web-based LS that enables visitors to use the available features and tools to learn about a specific domain. According to the approach discussed here, each desired virtual LS must be specified rigorously through a Domain-Specific Language (DSL), called CaVaDSL, designed and implemented in this work. Furthermore, a set of processors (generators) was developed. These generators have the duty, receiving a CaVaDSL specification as input, of transforming it into several web scripts to be recognized and rendered by a web browser, producing the final virtual LS. Aiming to validate the proposed architecture, three real case studies were used: (1) emigration documents belonging to Fafe's Archive; (2) the prosopographical repository of the Fasti Ecclesiae Portugaliae project; and (3) the collection of life stories of the Museum of the Person. These real scenarios are relevant as they promote the digital preservation and dissemination of Cultural Heritage, contributing to human welfare.
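    The generation step described, turning a specification into web scripts for a browser to render, can be caricatured in a few lines. The spec layout and HTML shape below are invented for illustration and are not actual CaVaDSL syntax or CaVa's real generators.

```python
# Illustrative caricature of a spec-to-web generator: a hypothetical,
# much-simplified exhibition spec is turned into an HTML fragment.
# Field names ("title", "exhibits", "concept") are assumptions made
# for this sketch only.

spec = {
    "title": "Emigration Documents",
    "exhibits": [
        {"label": "Passport, 1923", "concept": "TravelDocument"},
        {"label": "Letter home", "concept": "Correspondence"},
    ],
}

def generate_learning_space(spec):
    """Render one virtual Learning Space page from a spec dict."""
    items = "\n".join(
        f'  <li class="{e["concept"]}">{e["label"]}</li>'
        for e in spec["exhibits"]
    )
    return f'<h1>{spec["title"]}</h1>\n<ul>\n{items}\n</ul>'

html = generate_learning_space(spec)
```

    The real pipeline targets full web applications backed by an ontology and a document repository, but the shape is the same: declarative spec in, renderable scripts out.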

    eXplainable AI for trustworthy healthcare applications

    Get PDF
    Acknowledging that AI will inevitably become a central element of clinical practice, this thesis investigates the role of eXplainable AI (XAI) techniques in developing trustworthy AI applications in healthcare. The first part of this thesis focuses on the societal, ethical, and legal aspects of the use of AI in healthcare. It first compares the different approaches to AI ethics worldwide and then focuses on the practical implications of the European ethical and legal guidelines for AI applications in healthcare. The second part of the thesis explores how XAI techniques can help meet three key requirements identified in the initial analysis: transparency, auditability, and human oversight. The technical transparency requirement is tackled by enabling explanatory techniques to deal with common healthcare data characteristics and by tailoring them to the medical field. In this regard, this thesis presents two novel XAI techniques that incrementally reach this goal by first focusing on multi-label predictive algorithms and then tackling sequential data and incorporating domain-specific knowledge in the explanation process. This thesis then analyzes the ability to leverage the developed XAI technique to audit a fictional commercial black-box clinical decision support system (DSS). Finally, the thesis studies AI explanations' ability to effectively enable human oversight by studying the impact of explanations on the decision-making process of healthcare professionals.

    JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights

    Get PDF
    In this paper we present the web platform JURI SAYS, which automatically predicts decisions of the European Court of Human Rights based on communicated cases, which are published by the court early in the proceedings and are often available many years before the final decision is made. Our system therefore predicts future judgements of the court. The platform is available at jurisays.com and shows the predictions compared to the actual decisions of the court. It is automatically updated every month with predictions for the new cases. Additionally, the system highlights the sentences and paragraphs that are most important for the prediction (i.e. violation vs. no violation of human rights).

    Knowledge base exchange: the case of OWL 2 QL

    Get PDF
    In this article, we define and study the problem of exchanging knowledge between a source and a target knowledge base (KB), connected through mappings. Differently from the traditional data exchange setting, which considers only the exchange of data, we are interested in exchanging implicit knowledge. As representation formalism we use Description Logics (DLs), thus assuming that the source and target KBs are given as a DL TBox and ABox, while the mappings have the form of DL TBox assertions. We define a general framework of KB exchange, and study the problem of translating the knowledge in the source KB according to the mappings expressed in OWL 2 QL, the profile of the standard Web Ontology Language OWL 2 based on the description logic DL-Lite_R. We develop novel game- and automata-theoretic techniques, and we provide complexity results that range from NLogSpace to ExpTime.
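    The mapping-driven translation at the heart of KB exchange can be shown in a toy form. The article's mappings are full OWL 2 QL TBox assertions, and the hard part is translating implicit knowledge; the plain concept-renaming dictionary below only illustrates the easy, explicit-data half of the problem.

```python
# Toy illustration of the data-translation part of KB exchange: source
# ABox membership assertions are carried along simple concept-to-concept
# mappings. Real mappings in the article are DL TBox assertions, not a
# dict; the names here are invented for the sketch.

mapping = {"src:Employee": "tgt:Worker", "src:Dept": "tgt:Unit"}

def translate_abox(abox, mapping):
    """Keep only assertions whose concept is mapped, renamed to the target."""
    return {(mapping[c], ind) for c, ind in abox if c in mapping}

src_abox = {("src:Employee", "alice"), ("src:Intern", "bob")}
tgt_abox = translate_abox(src_abox, mapping)
# → {("tgt:Worker", "alice")}; "src:Intern" has no mapping and is dropped
```

    The article's contribution is precisely what this sketch omits: also carrying over the implicit consequences entailed by the source TBox, which is where the game- and automata-theoretic machinery comes in.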

    Usability analysis of contending electronic health record systems

    Get PDF
    In this paper, we report the measured usability of two leading EHR systems during procurement. A total of 18 users participated in paired usability testing of three scenarios: ordering and managing medications by an outpatient physician, medication administration by an inpatient nurse, and scheduling of appointments by nursing staff. Audio, screen captures, satisfaction ratings, task success and errors made were collected during testing. We found a clear difference between the systems in the percentage of successfully completed tasks, two different satisfaction measures and perceived learnability when looking at the results over all scenarios. We conclude that usability should be evaluated during procurement, and that the difference in usability between systems could be revealed even with fewer measures than were used in our study. © 2019 American Psychological Association Inc. All rights reserved. Peer reviewed.