
    Outlier concepts auditing methodology for a large family of biomedical ontologies

    Background Summarization networks are compact summaries of ontologies. The “Big Picture” view offered by summarization networks makes it possible to identify sets of concepts that are more likely to have errors than control concepts. For ontologies that have outgoing lateral relationships, we have developed the "partial-area taxonomy" summarization network. Prior research has identified one kind of outlier concept: concepts in small partial-areas within partial-area taxonomies. Previously we have shown that the small partial-area technique works successfully for four ontologies (or their hierarchies). Methods To improve Quality Assurance (QA) scalability, a family-based QA framework was developed, in which one QA technique is potentially applicable to a whole family of ontologies with similar structural features. The 373 ontologies hosted at the NCBO BioPortal in 2015 were classified into a collection of families based on structural features. A meta-ontology represents this family collection, including one family of ontologies having outgoing lateral relationships. The process of updating the current meta-ontology is described. To conclude that one QA technique is applicable to at least half of the members of a family F, the technique must be demonstrated as successful for six out of six ontologies in F. We describe a hypothesis setting the condition required for a technique to be successful for a given ontology. The process of a study to demonstrate such success is described. This paper aims to demonstrate the scalability of the small partial-area technique. Results We first updated the meta-ontology, classifying 566 BioPortal ontologies. There were 371 ontologies in the family with outgoing lateral relationships. We demonstrated the success of the small partial-area technique for two ontology hierarchies that belong to this family, SNOMED CT’s Specimen hierarchy and NCIt’s Gene hierarchy. Together with the four previous ontologies from the same family, we fulfilled the “six out of six” condition required to show scalability for the whole family. Conclusions We have shown that the small partial-area technique is potentially successful for the family of ontologies with outgoing lateral relationships in BioPortal, thus improving the scalability of this QA technique.
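    The auditing step this abstract describes, flagging concepts that sit in small partial-areas as likely error candidates, can be sketched in a few lines. This is a minimal illustration only: it assumes a precomputed concept-to-partial-area mapping, and the function name, threshold, and toy concepts are hypothetical, not the authors' implementation.

    ```python
    # Hypothetical sketch: flag concepts belonging to "small" partial-areas,
    # the outlier set that the small partial-area technique audits first.
    from collections import defaultdict

    def small_partial_area_concepts(concept_to_area, size_threshold=5):
        """Group concepts by partial-area and return those whose
        partial-area holds at most `size_threshold` concepts."""
        areas = defaultdict(list)
        for concept, area in concept_to_area.items():
            areas[area].append(concept)
        return [c for members in areas.values()
                if len(members) <= size_threshold
                for c in members]

    # Toy example: one large partial-area ("A") and one small one ("B").
    mapping = {"Blood specimen": "A", "Serum specimen": "A",
               "Plasma specimen": "A", "Tissue specimen": "A",
               "Calculus specimen": "A", "Fluid sample": "A",
               "Oddball concept": "B"}
    print(small_partial_area_concepts(mapping))  # ['Oddball concept']
    ```

    Concepts returned by such a filter would then be reviewed manually, since membership in a small partial-area only raises the likelihood of an error, it does not prove one.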

    Combining multivariate statistics and the think-aloud protocol to assess Human-Computer Interaction barriers in symptom checkers

    Symptom checkers are software tools that allow users to submit a set of symptoms and receive advice related to them in the form of a diagnosis list, health information or triage. The heterogeneity of their potential users and the number of different components in their user interfaces can make testing with end-users unaffordable. We designed and executed a two-phase method to test the respiratory diseases module of the symptom checker Erdusyk. Phase I consisted of an online test with a large sample of users (n = 53), who evaluated the system remotely and completed a questionnaire based on the Technology Acceptance Model. Principal Component Analysis was used to correlate each section of the interface with the questionnaire responses, thus identifying which areas of the user interface contributed significantly to technology acceptance. In the second phase, the think-aloud procedure was executed with a small sample (n = 15), focusing on the areas with significant contributions in order to analyze the reasons for those contributions. Our method was used effectively to optimize the testing of symptom checker user interfaces. The method kept the cost of testing at reasonable levels by restricting the use of the think-aloud procedure while still assuring a high amount of coverage. The main barriers detected in Erdusyk were related to problems understanding time repetition patterns, the selection of levels in scales to record intensities, navigation, the quantification of some symptom attributes, and the characteristics of the symptoms. (C) 2017 Elsevier Inc. All rights reserved.
    This work was supported by Helse Nord [grant HST1121-13], the Faculty of Health Sciences of UIT The Arctic University of Norway [researcher code 1108], and The Research Council of Norway [grant 248150/O70]. We thank Professor Emeritus Rafael Romero-Villafranca for reviewing the statistical analysis of this paper.
    Marco-Ruiz, L.; Bones, E.; De La Asuncion, E.; Gabarron, E.; Aviles-Solis, JC.; Lee, E.; Traver Salcedo, V.... (2017). Combining multivariate statistics and the think-aloud protocol to assess Human-Computer Interaction barriers in symptom checkers. Journal of Biomedical Informatics. 74:104-122. https://doi.org/10.1016/j.jbi.2017.09.002
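    The Phase I analysis rests on Principal Component Analysis: finding the directions of greatest variance across per-section interaction measures. A minimal NumPy sketch on synthetic data illustrates the mechanics; the sample size matches the paper's n = 53, but the data, the four "sections", and the measures themselves are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    # Synthetic stand-in for per-section interaction measures:
    # 53 users x 4 hypothetical interface sections.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(53, 4))
    X[:, 2] += 2 * rng.normal(size=53)   # give one section extra variance

    # PCA via eigendecomposition of the covariance matrix.
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending
    pc1 = eigvecs[:, -1]                 # direction of greatest variance
    dominant_section = int(np.abs(pc1).argmax())
    print(dominant_section)              # the high-variance section, index 2
    ```

    In the study's setting, a section loading heavily on a component that correlates with low acceptance scores is the kind of "significant contribution" that earmarked that area for the Phase II think-aloud sessions.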

    CDC DENV-1-4 : real-time RT-PCR assay for detection and serotype identification of dengue virus : instructions for use package insert

    The CDC DENV-1-4 Real-Time RT-PCR Assay is intended for use on an Applied Biosystems (ABI) 7500 Fast Dx Real-Time PCR Instrument:
    • For the diagnosis of dengue in serum or plasma collected from patients with signs and symptoms consistent with dengue (mild or severe) during the acute phase;
    • For the identification of dengue virus serotypes 1, 2, 3 or 4 from viral RNA in serum or plasma (sodium citrate) collected from human patients with dengue during the acute phase;
    • To provide epidemiologic information for surveillance of circulating dengue viruses.
    Testing of clinical blood specimens (serum or plasma) with the CDC DENV-1-4 Real-Time RT-PCR Assay should not be performed unless the patient meets clinical and/or epidemiologic criteria for testing suspect dengue cases. The CDC DENV-1-4 Real-Time RT-PCR Assay is not FDA cleared or approved for the screening of blood or plasma donors. Negative results obtained with this test do not preclude the diagnosis of dengue and should not be used as the sole basis for treatment or other patient management decisions.
    Dengue (pronounced den' gee) virus -- Where is dengue common? -- Dengue in Puerto Rico -- Symptoms -- Diagnosis -- Treatment -- Prevention -- What are CDC and the Puerto Rico Department of Health (PRDH) doing to control dengue

    Applying Process-Oriented Data Science to Dentistry

    Background: Healthcare services now often follow evidence-based principles, so technologies such as process and data mining can help inform their drive towards optimal service delivery. Process mining (PM) can help the monitoring and reporting of this service delivery, measure compliance with guidelines, and assess effectiveness. In this research, PM extracts information about clinical activity recorded in dental electronic health records (EHRs) and converts it into process models, providing stakeholders with unique insights into the dental treatment process. This thesis addresses a gap in prior research by demonstrating how process analytics can enhance our understanding of these processes and the effects of changes in strategy and policy over time. It also emphasises the importance of a rigorous and documented methodological approach, often missing from the published literature. Aim: Apply the emerging technology of PM to an oral health dataset, illustrating the value of the data in the dental repository, and demonstrating how it can be presented in a useful and actionable manner to address public health questions. A subsidiary aim is to present the methodology used in this research in a way that provides useful guidance to future applications of dental PM. Objectives: Review dental and healthcare PM literature, establishing the state of the art. Evaluate existing PM methods and their applicability to this research’s dataset. Extend existing PM methods to achieve the aims of this research. Apply PM methods to the research dataset to address public health questions. Document and present this research’s methodology. Apply data mining, PM, and data visualisation to provide insights into the variable pathways leading to different outcomes. Identify the data needed for PM of a dental EHR. Identify challenges to PM of dental EHR data.
    Methods: Extend existing PM methods to facilitate PM research in public health by detailing how data extracts from a dental EHR can be effectively managed, prepared, and used for PM. Use existing dental EHR and PM standards to generate a data reference model for effective PM. Develop a data-quality management framework. Results: Comparing the outputs of PM to established care pathways showed that the dataset facilitated generation of high-level pathways but was less suitable for detailed guidelines. PM was used to identify the care pathway preceding a dental extraction under general anaesthetic, providing unique insights into this pathway and into the effects of policy decisions around school dental screenings. Conclusions: This research showed that PM and data-mining techniques can be applied to dental EHR data, leading to fresh insights about dental treatment processes. This emerging technology, along with established data-mining techniques, should provide valuable insights to policy makers such as principal and chief dental officers to inform care pathways and policy decisions.
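    A core building block of the process mining described above is discovering a directly-follows graph from an event log of case traces. A toy sketch shows the idea; the activities are hypothetical dental events, not taken from the thesis's dataset.

    ```python
    from collections import Counter

    # Toy event log: (case id, ordered activities) for three patients.
    log = [
        ("case1", ["exam", "x-ray", "filling"]),
        ("case2", ["exam", "filling"]),
        ("case3", ["exam", "x-ray", "extraction"]),
    ]

    # Count directly-follows pairs within each case; the weighted edges
    # of this graph are the starting point for most discovery algorithms.
    dfg = Counter((a, b) for _, trace in log
                  for a, b in zip(trace, trace[1:]))
    print(dfg[("exam", "x-ray")])    # 2
    print(dfg[("exam", "filling")])  # 1
    ```

    Real PM toolkits build on exactly this structure, then filter infrequent edges and lay the result out as a process model for comparison against published care pathways.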

    Scaling the development of large ontologies : identitas and hypernormalization

    PhD Thesis. During the last decade ontologies have become a fundamental part of the life sciences for building organised computational knowledge. Currently, there are more than 800 biomedical ontologies hosted by the NCBO BioPortal repository. However, the proliferation of ontologies in the biomedical and biological domains has highlighted a number of problems. As ontologies become large, their development and maintenance become more challenging and time-consuming. Therefore, the scalability of ontology development has become problematic. In this thesis, we examine two new approaches that can help address this challenge. First, we consider a new approach to identifiers that could significantly facilitate the scalability of ontologies and overcome some related issues with monotonic, numeric identifiers while remaining semantics-free. Our solutions are described, along with the Identitas library, which allows concurrent development, pronounceability and error checking. The library has been integrated into two ontology development environments, Protégé and Tawny-OWL. This thesis also discusses the ways in which current ontological practices could be migrated towards the use of this scheme. Second, we investigate the use of hypernormalisation, patternisation and programmatic approaches by asking how we could use them to rebuild the Gene Ontology (GO). The aim of the hypernormalisation and patternisation techniques is to allow the ontology developer to manage the ontology's maintainability and evolution. To apply this approach we had to analyse the ontology structure, starting with the Molecular Function Ontology (MFO). The MFO is formed from several large and tangled hierarchies of classes, each of which describes a broad molecular activity. The exploitation of the hypernormalisation approach resulted in the creation of a hypernormalised form of the Transporter Activity (TA) and Catalytic Activity (CA) hierarchies, which together constitute 78% of all classes in MFO.
    The hypernormalised structure of the TA and CA hierarchies is generated from developed higher-level patterns and novel content-specific patterns, and exploits ontology logical reasoners. The generated ontologies are robust, easy to maintain, and can be developed and extended freely. Although there are a variety of ontology development tools, Tawny-OWL is a programmatic interactive tool for ontology creation and management that provides a set of patterns explicitly supporting the creation of a hypernormalised ontology. Finally, the investigation of hypernormalisation highlighted inconsistent classifications and identified a significant semantic mismatch between GO and the Chemical Entities of Biological Interest (ChEBI). Although both ontologies describe the same real entities, GO often refers to the form most common in biology, while ChEBI is more specific and precise. The use of hypernormalisation forces us to deal with this mismatch; we used the equivalence axioms created by the GO-Plus ontology. To sum up, to address the scalability and ease the development of ontologies, we propose a new identifier scheme and investigate the use of the hypernormalisation methodology. Together, Identitas and the hypernormalisation technique should enable the construction of large-scale ontologies in the future.
    Northern Borders University, Saudi Arabia
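    The pronounceable, semantics-free identifiers described above can be illustrated with a proquint-style encoding, in which 16 bits map to an alternating consonant-vowel word that is easy to say and transcribe. This is a generic sketch of the encoding idea, not Identitas's actual API.

    ```python
    # 16 consonants (4 bits each) and 4 vowels (2 bits each); a 5-letter
    # consonant-vowel-consonant-vowel-consonant word encodes 16 bits.
    CONS = "bdfghjklmnprstvz"
    VOWELS = "aiou"

    def to_proquint(n: int) -> str:
        """Encode a 16-bit integer as a pronounceable 5-letter word."""
        if not 0 <= n < 2**16:
            raise ValueError("a proquint word encodes exactly 16 bits")
        return (CONS[(n >> 12) & 0xF] + VOWELS[(n >> 10) & 0x3] +
                CONS[(n >> 6) & 0xF] + VOWELS[(n >> 4) & 0x3] +
                CONS[n & 0xF])

    print(to_proquint(0x0000))  # babab
    print(to_proquint(0xFFFF))  # zuzuz
    ```

    Because decoding rejects any letter outside the two alphabets, a transcription error is likely to produce an invalid word rather than a different valid identifier, which is the kind of error checking the abstract attributes to the scheme.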