
    The Extent and Coverage of Current Knowledge of Connected Health: Systematic Mapping Study

    Background: This paper examines the development of the Connected Health research landscape with a view to providing a historical perspective on existing Connected Health research. Connected Health has become a rapidly growing research field as our healthcare system faces pressure to become more proactive and patient-centred. Objective: We aimed to identify the extent and coverage of the current body of knowledge in Connected Health. With this, we want to identify which topics have drawn the attention of Connected Health researchers, and whether there are gaps or interdisciplinary opportunities for further research. Methods: We used a systematic mapping study that combines scientific contributions from research on medicine, business, computer science, and engineering. We analysed the papers along seven classification criteria: publication source, publication year, research type, empirical type, contribution type, research topic, and the condition studied in the paper. Results: Altogether, our search resulted in 208 papers, which were analysed by a multidisciplinary group of researchers. Our results indicate a slow start for Connected Health research but a steady upswing since 2013. The majority of papers proposed healthcare solutions (37%) or evaluated Connected Health approaches (23%). Case studies (28%) and experiments (26%) were the most popular forms of scientific validation employed. Diabetes, cancer, multiple sclerosis, and heart conditions are among the most prevalent conditions studied. Conclusions: We conclude that Connected Health seems to be an established field of research, which has been growing strongly during the last five years. There is more focus on technology-driven research, with a strong contribution from medicine, but business aspects of Connected Health are not as well studied.

    Estimating Marginal Healthcare Costs Using Genetic Variants as Instrumental Variables: Mendelian Randomization in Economic Evaluation

    Accurate measurement of the marginal healthcare costs associated with different diseases and health conditions is important, especially for increasingly prevalent conditions such as obesity. However, existing observational study designs cannot identify the causal impact of disease on healthcare costs. This paper explores the possibilities for causal inference offered by Mendelian Randomization, a form of instrumental variable analysis that uses genetic variation as a proxy for modifiable risk exposures, to estimate the effect of health conditions on cost. Well-conducted genome-wide association studies provide robust evidence of the associations of genetic variants with health conditions or disease risk factors. The subsequent causal effects of these health conditions on cost can be estimated by using genetic variants as instruments for the health conditions. This is because the approximately random allocation of genotypes at conception means that many genetic variants are orthogonal to observable and unobservable confounders. Datasets with linked genotypic and resource use information obtained from electronic medical records or from routinely collected administrative data are now becoming available and will facilitate this form of analysis. We describe some of the methodological issues that arise in this type of analysis, which we illustrate by considering how Mendelian Randomization could be used to estimate the causal impact of obesity, a complex trait, on healthcare costs. We describe some of the data sources that could be used for this type of analysis. We conclude by considering the challenges and opportunities offered by Mendelian Randomization for economic evaluation.
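
The instrumental-variable logic the abstract describes can be sketched with simulated data: a genetic variant shifts an exposure, an unobserved confounder biases the naive regression, and the Wald ratio estimator recovers the causal effect. All variables and effect sizes below are invented for illustration, not taken from any real study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated data: genetic variant G (instrument), unobserved confounder U,
# exposure X (e.g. BMI) and healthcare cost Y. True causal effect of X on Y = 2.0.
G = rng.binomial(2, 0.3, n)                   # allele count 0/1/2
U = rng.normal(size=n)                        # confounder affecting both X and Y
X = 0.5 * G + U + rng.normal(size=n)          # exposure
Y = 2.0 * X + 3.0 * U + rng.normal(size=n)    # cost

# Naive OLS slope of Y on X is biased upward by U.
ols = np.cov(X, Y)[0, 1] / np.var(X)

# Wald/IV estimator: reduced-form coefficient over first-stage coefficient.
# Valid because G is (approximately) randomly allocated, hence orthogonal to U.
iv = (np.cov(G, Y)[0, 1] / np.var(G)) / (np.cov(G, X)[0, 1] / np.var(G))

print(f"naive OLS: {ols:.2f}, IV estimate: {iv:.2f}")
```

With this setup the OLS estimate is biased well above 2.0, while the IV estimate lands close to the true effect, mirroring the paper's argument for using genetic variants as instruments.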

    Information retrieval and text mining technologies for chemistry

    Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing systems performance, more particularly the CHEMDNER and CHEMDNER patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, cheminformatics approaches for mapping the extracted chemical names into chemical structures and their subsequent annotation, together with text mining applications for linking chemistry with biological information, are also presented. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field. A.V. and M.K. acknowledge funding from the European Community's Horizon 2020 Program (project reference: 654021 - OpenMinted). M.K. additionally acknowledges the Encomienda MINETAD-CNIO as part of the Plan for the Advancement of Language Technology. O.R. and J.O. thank the Foundation for Applied Medical Research (FIMA), University of Navarra (Pamplona, Spain). This work was partially funded by Consellería de Cultura, Educación e Ordenación Universitaria (Xunta de Galicia), and FEDER (European Union), and the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of UID/BIO/04469/2013 unit and COMPETE 2020 (POCI-01-0145-FEDER-006684). We thank Iñigo García-Yoldi for useful feedback and discussions during the preparation of the manuscript.
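
Chemical named-entity recognition of the kind benchmarked in the CHEMDNER tasks is often bootstrapped with dictionary and pattern matching before machine-learned taggers are applied. The tiny lexicon and the crude formula regex below are invented for illustration and are not a real system:

```python
import re

# Toy lexicon of trivial chemical names (illustrative only).
LEXICON = {"aspirin", "ibuprofen", "caffeine", "ethanol"}

# Crude pattern for molecular formulas such as C6H12O6: two or more
# element-symbol/count units. Real systems use far stricter grammars.
FORMULA = re.compile(r"\b(?:[A-Z][a-z]?\d*){2,}\b")

def find_chemicals(text):
    """Return sorted (start, end, mention) spans for candidate chemical entities."""
    spans = []
    for m in re.finditer(r"\b\w+\b", text):
        if m.group().lower() in LEXICON:
            spans.append((m.start(), m.end(), m.group()))
    for m in FORMULA.finditer(text):
        spans.append((m.start(), m.end(), m.group()))
    return sorted(spans)

print(find_chemicals("Aspirin and C9H8O4 denote the same compound."))
```

The extracted mentions would then be mapped to chemical structures (e.g. via InChI or SMILES lookup) for knowledge-base construction, as the review discusses.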

    Towards principles of ontology-based annotation of clinical narratives

    Despite the increasing availability of ontology-based semantic resources for biomedical content representation, large amounts of clinical data are in narrative form only. Therefore, many clinical information management tasks require unlocking this information using natural language processing (NLP). Clinical corpora annotated by humans are crucial resources. On the one hand, they are needed to train and domain-fine-tune language models with the goal of transforming information from unstructured free text into an interoperable form. On the other hand, manually annotated corpora are indispensable for assessing the results of information extraction using NLP. Annotation quality is crucial. Therefore, detailed annotation guidelines are needed to define the form that extracted information should take, to prevent human annotators from making erratic annotation decisions, and to guarantee good inter-annotator agreement. Our hypothesis is that, to this end, human annotations (and subsequently machine annotations learned from human annotations) should (i) be based on ontological principles, and (ii) be consistent with existing clinical documentation standards. With the experience of several annotation projects, we highlight the need for sophisticated guidelines. We formulate a set of abstract principles on which such guidelines should be based, followed by examples of how to keep them, on the one hand, user-friendly and consistent, and on the other hand, compatible with the international semantic standards SNOMED CT and FHIR, including their areas of overlap. We sketch the representation of the resulting annotations in a knowledge graph as a state-of-the-art semantic representation paradigm, which can be enriched by additional content at A-Box and T-Box levels and on which symbolic and neural reasoning tasks can be applied.
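
The inter-annotator agreement the abstract invokes as a quality criterion is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, with invented labels over ten hypothetical text spans:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement: expected overlap given each annotator's label frequencies.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Invented annotations: two annotators assigning toy categories to ten spans.
a = ["Finding", "Drug", "Finding", "Finding", "Drug",
     "Drug", "Finding", "Drug", "Finding", "Drug"]
b = ["Finding", "Drug", "Finding", "Drug", "Drug",
     "Drug", "Finding", "Drug", "Finding", "Finding"]
print(round(cohens_kappa(a, b), 2))
```

Low kappa values on a pilot round are precisely the signal that the guidelines need the kind of sharpening the paper's principles aim at.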

    SEMCARE: Multilingual Semantic Search in Semi-Structured Clinical Data.

    The vast amount of clinical data in electronic health records constitutes a great potential for secondary use. However, most of this content consists of unstructured or semi-structured texts, which are difficult to process. Several challenges are still pending: medical language idiosyncrasies in different natural languages, and the large variety of medical terminology systems. In this paper we present SEMCARE, a European initiative designed to minimize these problems by providing a multilingual platform (English, German, and Dutch) that allows users to express complex queries and obtain relevant search results from clinical texts. SEMCARE is based on a selection of adapted biomedical terminologies, together with Apache UIMA and Apache Solr as open-source, state-of-the-art natural language pipeline and indexing technologies. SEMCARE has been deployed and is currently being tested at three medical institutions in the UK, Austria, and the Netherlands, showing promising results in a cardiology use case.
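
The core retrieval idea, expanding a query concept into its terminology synonyms before matching clinical text, can be sketched in a few lines. The synonym table and documents below are invented for illustration; the real platform uses adapted biomedical terminologies with Apache UIMA and Apache Solr rather than this toy in-memory matcher.

```python
import re

# Invented concept-to-synonym table standing in for a biomedical terminology.
SYNONYMS = {
    "myocardial infarction": {"myocardial infarction", "heart attack"},
    "hypertension": {"hypertension", "high blood pressure"},
}

# Invented clinical note snippets standing in for an indexed corpus.
DOCS = {
    1: "Patient admitted after a heart attack last night.",
    2: "History of high blood pressure, on medication.",
    3: "No cardiac complaints today.",
}

def search(concept):
    """Return ids of documents mentioning the concept or any of its synonyms."""
    terms = SYNONYMS.get(concept.lower(), {concept.lower()})
    return sorted(
        doc_id for doc_id, text in DOCS.items()
        if any(re.search(r"\b" + re.escape(t) + r"\b", text.lower()) for t in terms)
    )

print(search("myocardial infarction"))  # doc 1 matches via "heart attack"
```

In the deployed system this expansion happens at query time against a Solr index built by the UIMA annotation pipeline, which is what makes the search semantic rather than purely lexical.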