    Knowledge Rich Natural Language Queries over Structured Biological Databases

    Increasingly, keyword, natural language, and NoSQL queries are being used for information retrieval from traditional as well as non-traditional databases such as web, document, image, GIS, legal, and health databases. While their popularity is undeniable for obvious reasons, their engineering is far from simple. For the most part, the semantics- and intent-preserving mapping of a well-understood natural language query expressed over a structured database schema to a structured query language is still a difficult task, and research to tame the complexity is intense. In this paper, we propose a multi-level knowledge-based middleware to facilitate such mappings that separates the conceptual level from the physical level. We augment these multi-level abstractions with a concept reasoner and a query strategy engine to dynamically link arbitrary natural language querying to well-defined structured queries. We demonstrate the feasibility of our approach by presenting a Datalog-based prototype system, called BioSmart, that can compute responses to arbitrary natural language queries over arbitrary databases once a syntactic classification of the natural language query is made.
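The multi-level idea in this abstract, where a classified natural language query passes through a conceptual layer before being bound to a physical schema, can be sketched as follows. This is an illustrative reconstruction, not code from BioSmart: the concept names, table mappings, and query templates are all invented for the example.

```python
# Hypothetical sketch: a syntactically classified NL query is routed through a
# concept lookup (conceptual level) to a database-specific template (physical
# level). All names and templates below are illustrative assumptions.

CONCEPT_MAP = {
    "gene": {"table": "genes", "key": "symbol"},
    "protein": {"table": "proteins", "key": "accession"},
}

QUERY_TEMPLATES = {
    # syntactic class of the NL query -> structured-query template
    "lookup": "SELECT * FROM {table} WHERE {key} = '{value}'",
    "count": "SELECT COUNT(*) FROM {table}",
}

def map_query(syntactic_class, concept, value=None):
    """Map a classified NL query to a structured query string."""
    physical = CONCEPT_MAP[concept]          # conceptual -> physical schema
    template = QUERY_TEMPLATES[syntactic_class]
    return template.format(value=value, **physical)

print(map_query("lookup", "gene", "BRCA1"))
# -> SELECT * FROM genes WHERE symbol = 'BRCA1'
```

A real system would, as the abstract notes, interpose a concept reasoner and a query strategy engine rather than static dictionaries, but the separation of levels is the same.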

    The successes and challenges of harmonising juvenile idiopathic arthritis (JIA) datasets to create a large-scale JIA data resource

    Background: CLUSTER is a UK consortium focussed on precision medicine research in JIA/JIA-Uveitis. As part of this programme, a large-scale JIA data resource was created by harmonising and pooling existing real-world studies. Here we present the challenges and progress towards the creation of this unique large JIA dataset. Methods: Four real-world studies contributed data; two clinical datasets of JIA patients starting first-line methotrexate (MTX) or tumour necrosis factor inhibitors (TNFi) were created. Variables were selected based on a previously developed core dataset, and encrypted NHS numbers were used to identify children contributing similar data across multiple studies. Results: Of 7013 records (from 5435 individuals), 2882 (1304 individuals) represented the same child across studies. The final datasets contain 2899 (MTX) and 2401 (TNFi) unique patients; 1018 are in both datasets. Missingness ranged from 10 to 60% and was not improved through harmonisation. Conclusions: Combining data across studies has achieved dataset sizes rarely seen in JIA, which is invaluable to progressing research. The loss of variable specificity, the extent of missingness, and their impact on future analyses require further consideration.
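The linkage step described in the Methods, recognising the same child across pooled studies via an encrypted NHS number, can be sketched in a few lines. The field names and the use of a SHA-256 hash here are assumptions for illustration; the consortium's actual encryption scheme and record layout are not described in the abstract.

```python
# Illustrative sketch of cross-study record linkage: records from multiple
# studies are pooled and a one-way hash of the NHS number (standing in for
# the encryption used in practice) identifies children seen in more than
# one study. All field names and values are hypothetical.
import hashlib

def pseudonymise(nhs_number: str) -> str:
    """One-way hash standing in for NHS-number encryption."""
    return hashlib.sha256(nhs_number.encode()).hexdigest()

records = [
    {"study": "A", "nhs": "1234567890", "mtx_start": "2015-03"},
    {"study": "B", "nhs": "1234567890", "mtx_start": "2015-03"},  # same child
    {"study": "A", "nhs": "9999999999", "mtx_start": "2016-07"},
]

by_child = {}
for rec in records:
    by_child.setdefault(pseudonymise(rec["nhs"]), []).append(rec)

unique_children = len(by_child)
cross_study = sum(1 for recs in by_child.values()
                  if len({r["study"] for r in recs}) > 1)
print(unique_children, cross_study)  # 2 unique children, 1 in both studies
```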

    Emerging Paradigms in Genomics-Based Crop Improvement

    Next-generation sequencing platforms and high-throughput genotyping assays have remarkably expedited the pace of development of genomic tools and resources for several crops. Complementing the technological developments, conceptual shifts have also been witnessed in designing experimental populations. The availability of second-generation mapping populations encompassing multiple alleles, multiple traits, and extensive recombination events is radically changing the practice of classical QTL mapping. Additionally, emerging molecular breeding approaches such as marker-assisted recurrent selection (MARS), which can harness several QTLs, are of particular importance in obtaining a “designed” genotype carrying the most desirable combinations of favourable alleles. Furthermore, rapid generation of genome-wide marker data coupled with easy access to precise and accurate phenotypic screens enables large-scale exploitation of linkage disequilibrium (LD), not only to discover novel QTLs via whole-genome association scans but also to practise genomic estimated breeding value (GEBV)-based selection of genotypes. Given the refinements being experienced in analytical methods and software tools, multiparent populations will be the resource of choice for undertaking genome-wide association studies (GWAS), multiparent MARS, and genomic selection (GS). With this, it is envisioned that these high-throughput, high-power molecular breeding methods will greatly assist in exploiting the enormous potential underlying the breeding-by-design approach to facilitate accelerated crop improvement.
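GEBV-based selection, mentioned above, amounts to scoring each genotype by the sum of its marker allele counts weighted by previously estimated marker effects, then ranking candidates on that score. The sketch below illustrates only this arithmetic; the marker names, effect sizes, and genotypes are invented, and real genomic selection estimates the effects from a training population.

```python
# Minimal sketch of GEBV computation: breeding value = sum over markers of
# (allele count * estimated marker effect). Effects here are hypothetical.

marker_effects = {"m1": 0.8, "m2": -0.3, "m3": 1.2}

def gebv(genotype):
    """genotype: marker -> allele count (0, 1, or 2)."""
    return sum(marker_effects[m] * count for m, count in genotype.items())

lines = {
    "line_A": {"m1": 2, "m2": 0, "m3": 1},
    "line_B": {"m1": 0, "m2": 2, "m3": 2},
}

# Rank candidate lines by estimated breeding value, best first.
ranked = sorted(lines, key=lambda name: gebv(lines[name]), reverse=True)
print(ranked[0])  # line_A: 2*0.8 + 1*1.2 = 2.8 vs line_B: -0.6 + 2.4 = 1.8
```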

    Advanced Methods for Entity Linking in the Life Sciences

    The amount of knowledge increases rapidly due to the growing number of available data sources. However, the autonomy of data sources and the resulting heterogeneity prevent comprehensive data analysis and applications. Data integration aims to overcome heterogeneity by unifying different data sources and enriching unstructured data. The enrichment of data consists of different subtasks, among them the annotation process. The annotation process links document phrases to terms of a standardized vocabulary. Annotated documents enable effective retrieval methods, comparability of different documents, and comprehensive data analysis, such as finding adverse drug effects based on patient data. A vocabulary enables comparability through standardized terms. An ontology can also serve as a vocabulary; in addition, an ontology is defined by concepts, relationships, and logical constraints. The annotation process is applicable in different domains. Nevertheless, generic and specialized domains differ with respect to the annotation process. This thesis emphasizes the differences between the domains and addresses the identified challenges. The majority of annotation approaches focus on the evaluation of general domains, such as Wikipedia. This thesis evaluates the developed annotation approaches with case report forms, which are medical documents for examining clinical trials. Natural language poses various challenges, such as expressing similar meanings with different phrases. The proposed annotation method, AnnoMap, accounts for the fuzziness of natural language. A further challenge is the reuse of verified annotations. Existing annotations represent knowledge that can be reused for further annotation processes. AnnoMap includes a reuse strategy that utilizes verified annotations to link new documents to appropriate concepts. Due to the broad spectrum of areas in the biomedical domain, different annotation tools exist.
The tools perform differently depending on the particular domain. This thesis proposes a combination approach to unify results from different tools. The method utilizes existing tool results to build a classification model that can classify new annotations as correct or incorrect. The results show that the reuse strategy and the machine learning-based combination improve annotation quality compared to existing approaches focussing on the biomedical domain. A further part of data integration is entity resolution, which builds unified knowledge bases from different data sources. A data source consists of a set of records characterized by attributes. The goal of entity resolution is to identify records representing the same real-world entity. Many methods focus on linking data sources whose records are characterized by attributes. Nevertheless, only a few methods can handle graph-structured knowledge bases or consider temporal aspects. Temporal aspects are essential for identifying the same entities over different time intervals, since entities change under certain conditions over time. Moreover, records can be related to other records, so that a small graph structure exists for each record. These small graphs can be linked to each other if they represent the same entity. This thesis proposes an entity resolution approach for census data consisting of person records for different time intervals. The approach also considers the graph structure of persons given by family relationships. To achieve high-quality results, current methods apply machine learning techniques to classify record pairs as the same entity. The classification task uses a model learned from training data; in this case, the training data is a set of record pairs labeled as duplicates or not. Nevertheless, the generation of training data is time-consuming, so active learning techniques are relevant for reducing the number of training examples.
The entity resolution method for temporal graph-structured data shows an improvement over previous collective entity resolution approaches. The developed active learning approach achieves results comparable to supervised learning methods and outperforms other limited-budget active learning methods. Besides the entity resolution approach, the thesis introduces the concept of evolution operators for communities. These operators can express the dynamics of communities and individuals; for instance, we can formulate that two communities merged or split over time. Moreover, the operators allow observing the history of individuals. Overall, the presented annotation approaches generate high-quality annotations for medical forms. The annotations enable comprehensive analysis across different data sources as well as accurate queries. The proposed entity resolution approaches improve on existing ones, so that they contribute to the generation of high-quality knowledge graphs and to data analysis tasks.
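The record-pair classification step common to the entity resolution approaches above can be sketched as follows: each pair of person records is turned into similarity features, and a decision rule over those features labels the pair as the same entity or not. This is a toy stand-in, not the thesis's method; the thresholded average below replaces a trained classifier, and the field names are hypothetical.

```python
# Hedged sketch of record-pair classification for entity resolution:
# similarity features over attribute values feed a simple thresholded
# decision rule standing in for a trained model.
from difflib import SequenceMatcher

def name_sim(a: str, b: str) -> float:
    """String similarity in [0, 1] between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_entity(rec1, rec2, threshold=0.8):
    """Toy stand-in for a classifier trained on labeled record pairs."""
    features = [
        name_sim(rec1["name"], rec2["name"]),
        1.0 if rec1["birth_year"] == rec2["birth_year"] else 0.0,
    ]
    return sum(features) / len(features) >= threshold

a = {"name": "Jon Smith", "birth_year": 1891}
b = {"name": "John Smith", "birth_year": 1891}
print(same_entity(a, b))  # True: near-identical name, same birth year
```

The thesis's approaches additionally exploit the family-relationship graph around each person record and temporal intervals, which this attribute-only sketch omits.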

    A Framework for Fully Integrating Environmental Assessment


    2011 Strategic roadmap for Australian research infrastructure

    The 2011 Roadmap articulates the priority research infrastructure areas of national scale (capability areas) needed to further develop Australia’s research capacity and improve innovation and research outcomes over the next five to ten years. The capability areas have been identified through considered analysis of input provided by stakeholders, in conjunction with specialist advice from Expert Working Groups. It is intended that the Strategic Framework will provide a high-level policy framework, including principles to guide the development of policy advice and the design of programs related to the funding of research infrastructure by the Australian Government. Roadmapping has been identified in the Strategic Framework Discussion Paper as the most appropriate prioritisation mechanism for national, collaborative research infrastructure. The strategic identification of capability areas through a consultative roadmapping process was also validated in the report of the 2010 NCRIS Evaluation. The 2011 Roadmap is primarily concerned with medium- to large-scale research infrastructure; however, any landmark infrastructure requirements (typically involving an investment in excess of $100 million over five years from the Australian Government) identified in this process will be noted. NRIC has also developed a ‘Process to identify and prioritise Australian Government landmark research infrastructure investments’, which is currently under consideration by the government as part of broader deliberations relating to research infrastructure. NRIC will have strategic oversight of the development of the 2011 Roadmap as part of its overall policy view of research infrastructure.

    Identifying and appraising promising sources of UK clinical, health and social care data for use by NICE

    This report aimed to help the National Institute for Health and Care Excellence (NICE) identify opportunities for greater use of real-world data within its work. NICE identified five key ways in which real-world data was currently informing its work, or could do so in the future: (i) researching the effectiveness of interventions or practice in real-world (UK) settings; (ii) auditing the implementation of guidance; (iii) providing information on resource use and evaluating the potential impact of guidance; (iv) providing epidemiological information; and (v) providing information on current practice to inform the development of NICE quality standards. This report took a broad definition of ‘real-world’ data and created a map of UK sources, informed by a number of experts in real-world data as well as a literature search, to highlight where some of the opportunities may lie for NICE within its clinical, public health and social care remit. The report was commissioned by NICE, although the findings are likely to be of wider interest to a range of stakeholders interested in the role of real-world data in informing clinical, social care and public health decision-making. Most of the issues raised surrounding the use and appraisal of real-world data are likely to be generic, although the choice of datasets profiled in depth reflected the interests of NICE. We identified 275 sources that were named as real-world data sources for clinical, social care or public health investigation, 233 of which were deemed active. The real-world data landscape is therefore highly complex and heterogeneous, composed of sources with different purposes, structures and collection methods. Some real-world data sources are purposely set up or redeveloped to enhance their data linkages and to examine the presence, absence or effectiveness of integrated patient care; however, such sources are in the minority.
Furthermore, the small number of real-world data sources that are designed to enable the monitoring of care across providers, or at least have the capability to do so at a national level, have been used infrequently for this purpose in the literature. Data that offer the capacity to monitor transitions between health and social care do not currently exist at a national level, despite increasing recognition of the interdependency between these sectors. Among the data sources we included, it was clear that no single data source represented a panacea for NICE’s real-world data needs. This highlights the merits and importance of data linkage projects and suggests a need to triangulate evidence across different data sources, particularly in order to understand the feasibility and impact of guidance. There exists no overall catalogue or repository of real-world data sources for health, public health and social care, and previous initiatives aimed at creating such a resource have not been maintained. As much as there is a need for enhanced usage of the data, there is also a need for stocktaking, integration, standardisation and quality assurance of the different sources. This research highlights the need for a systematic approach to creating an inventory of sources with detailed metadata, and for funding to maintain this resource. This would represent an essential first step to support future initiatives aimed at enhancing the use of real-world data.