10 research outputs found

    ΠžΠΊΡ€ΡƒΠΆΠ΅ΡšΠ΅ Π·Π° Π°Π½Π°Π»ΠΈΠ·Ρƒ ΠΈ ΠΎΡ†Π΅Π½Ρƒ ΠΊΠ²Π°Π»ΠΈΡ‚Π΅Ρ‚Π° Π²Π΅Π»ΠΈΠΊΠΈΡ… ΠΈ ΠΏΠΎΠ²Π΅Π·Π°Π½ΠΈΡ… ΠΏΠΎΠ΄Π°Ρ‚Π°ΠΊΠ° (An Environment for the Analysis and Quality Assessment of Big and Linked Data)

    Linking and publishing data in the Linked Open Data format increases the interoperability and discoverability of resources over the Web. The process comprises several design decisions based on the Linked Data principles, which on one hand recommend using standards for the representation of and access to data on the Web, and on the other hand recommend setting hyperlinks between data from different sources. Despite the efforts of the World Wide Web Consortium (W3C), the main international standards organization for the World Wide Web, there is no single tailored formula for publishing data as Linked Data. In addition, the quality of the published Linked Open Data (LOD) is a fundamental issue that has yet to be thoroughly managed and considered. The main objective of this doctoral thesis is to design and implement a novel framework for selecting, analyzing, converting, interlinking, and publishing data from diverse sources, while paying close attention to quality assessment throughout all steps and modules of the framework. The goal is to examine whether, and to what extent, Semantic Web technologies are applicable for merging data from different sources and enabling end users to obtain additional information that was not available in the individual datasets, in addition to integration into the Semantic Web community space. The thesis also validates the applicability of the process in a specific and demanding use case: creating and publishing an Arabic Linked Drug Dataset based on open drug datasets from selected Arabic countries, and discussing the quality issues observed across the linked data life-cycle. To that end, a Semantic Data Lake was established in the pharmaceutical domain that allows further integration and the development of different business services on top of the integrated data sources. By representing data in an open, machine-readable format, the approach offers a solution for information and data dissemination, for building domain-specific applications, and for enriching and gaining value from the original datasets. The thesis showcases how the pharmaceutical domain benefits from evolving research trends for building competitive advantages. However, as elaborated in the thesis, a better understanding of the specifics of the Arabic language is required to extend the use of linked data technologies in targeted Arabic organizations.
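
    The conversion and interlinking steps described above can be pictured with a minimal sketch. The following Python fragment, using rdflib, turns two hypothetical drug records from different registries into RDF and links them with owl:sameAs when their ATC codes agree; all URIs, field names, and records are invented for illustration and are not taken from the thesis.

```python
# A minimal sketch (not the thesis framework) of the convert-and-interlink step:
# two hypothetical drug records from different national registries are turned
# into RDF and linked with owl:sameAs when their ATC codes agree.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX_A = Namespace("http://example.org/registryA/drug/")  # hypothetical source A
EX_B = Namespace("http://example.org/registryB/drug/")  # hypothetical source B
VOCAB = Namespace("http://example.org/vocab/")          # hypothetical vocabulary

source_a = [{"id": "a1", "label": "Paracetamol 500mg", "atc": "N02BE01"}]
source_b = [{"id": "b7", "label": "Acetaminophen",      "atc": "N02BE01"}]

g = Graph()
for ns, records in ((EX_A, source_a), (EX_B, source_b)):
    for rec in records:
        s = ns[rec["id"]]
        g.add((s, RDF.type, VOCAB.Drug))
        g.add((s, RDFS.label, Literal(rec["label"])))
        g.add((s, VOCAB.atcCode, Literal(rec["atc"])))

# Interlinking: a shared ATC code is taken as evidence of equivalence.
for a in source_a:
    for b in source_b:
        if a["atc"] == b["atc"]:
            g.add((EX_A[a["id"]], OWL.sameAs, EX_B[b["id"]]))

print(g.serialize(format="turtle"))
```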

    Linked Data Quality Assessment and its Application to Societal Progress Measurement

    In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data gravely affects the end results, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case. Datasets that contain quality problems may still be useful for certain applications; it depends on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and to make this quality information explicit. Assessing data quality is a particular challenge in LD, as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes quality assessment crucial for measuring how accurately the real-world data is represented. On the document Web, data quality can only be indirectly or vaguely defined, but LD requires more concrete and measurable data quality metrics. Such metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology employs LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems by the automatic creation of an extended schema for DBpedia.
The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities causing the errors, which helps users understand the quality issues and fix them. Finally, we consider a domain-specific use case that consumes LD and relies on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool, and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
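
    To make the notion of a measurable quality metric concrete, here is a toy completeness-style check in Python with rdflib: it computes the fraction of typed entities carrying an rdfs:label. The metric and the file name are illustrative stand-ins, not one of the thesis's 69 metrics or part of TripleCheckMate or R2RLint.

```python
# A toy completeness-style metric in the spirit of the dimensions discussed
# above: the fraction of typed entities that carry an rdfs:label.
from rdflib import Graph
from rdflib.namespace import RDF, RDFS

def label_completeness(g: Graph) -> float:
    subjects = set(g.subjects(RDF.type, None))  # all entities with an rdf:type
    if not subjects:
        return 1.0
    labelled = {s for s in subjects if (s, RDFS.label, None) in g}
    return len(labelled) / len(subjects)

g = Graph()
g.parse("dataset.ttl")  # hypothetical local dump of the dataset under assessment
print(f"label completeness: {label_completeness(g):.2%}")
```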

    Methods for Matching of Linked Open Social Science Data

    In recent years, the concept of Linked Open Data (LOD) has gained popularity and acceptance across various communities and domains. Science policymakers and organizations claim that the potential of semantic technologies and data exposed in this manner may support and enhance research processes and infrastructures that provide research information and services. In this thesis, we investigate whether these expectations can be met in the domain of the social sciences. In particular, we analyse and develop methods for matching social scientific data that is published as Linked Data, which we introduce as Linked Open Social Science Data. Based on expert interviews and a prototype application, we investigate the current consumption of LOD in the social sciences and its requirements. Following these insights, we first focus on the complete publication of Linked Open Social Science Data by extending and developing domain-specific ontologies for representing research communities, research data and thesauri. In the second part, methods for matching Linked Open Social Science Data are developed that address particular patterns and characteristics of the data typically used in social research. The results of this work contribute towards enabling a meaningful application of Linked Data in a scientific domain.
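
    As a rough illustration of what label-based matching involves, the sketch below scores pairs of concept labels from two invented thesauri with a normalized string similarity and accepts pairs above a threshold; real matching methods for social science data are considerably more elaborate than this.

```python
# A minimal label-based matcher of the kind such matching methods build on:
# concept labels from two hypothetical thesauri are compared with a normalized
# string similarity, and pairs above a threshold are proposed as matches.
from difflib import SequenceMatcher

thesaurus_a = {"c1": "unemployment", "c2": "labour migration"}
thesaurus_b = {"k9": "unemployment rate", "k3": "labor migration"}

def similarity(x: str, y: str) -> float:
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

THRESHOLD = 0.8  # an assumed cut-off; real systems tune this per dataset
for ca, la in thesaurus_a.items():
    for cb, lb in thesaurus_b.items():
        score = similarity(la, lb)
        if score >= THRESHOLD:
            print(f"{ca} ~ {cb}  ({la!r} vs {lb!r}, score={score:.2f})")
```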

    Dynamic enhancement of drug product labels to support drug safety, efficacy, and effectiveness

    Out-of-date or incomplete drug product labeling information may increase the risk of otherwise preventable adverse drug events. In recognition of these concerns, the United States Food and Drug Administration (FDA) requires drug product labels to include specific information. Unfortunately, several studies have found that drug product labeling fails to keep current with the scientific literature. We present a novel approach to addressing this issue. The primary goal of this approach is to better meet the information needs of persons who consult the drug product label for information on a drug's efficacy, effectiveness, and safety. Using FDA product label regulations as a guide, the approach links drug claims present in drug information sources available on the Semantic Web with specific product label sections. Here we report on pilot work that establishes the baseline performance characteristics of a proof-of-concept system implementing the novel approach. Claims from three drug information sources were linked to the Clinical Studies, Drug Interactions, and Clinical Pharmacology sections of the labels of drug products that contain one of 29 psychotropic drugs. The resulting Linked Data set maps 409 efficacy/effectiveness study results, 784 drug-drug interactions, and 112 metabolic pathway assertions derived from three clinically-oriented drug information sources (ClinicalTrials.gov, the National Drug File - Reference Terminology, and the Drug Interaction Knowledge Base) to the sections of 1,102 product labels. Proof-of-concept web pages were created for all 1,102 drug product labels to demonstrate one possible approach to presenting information that dynamically enhances drug product labeling. We found that approximately one in five efficacy/effectiveness claims were relevant to the Clinical Studies section of a psychotropic drug product label, with most relevant claims providing new information. We also identified several cases where all of the drug-drug interaction claims linked to the Drug Interactions section for a drug were potentially novel. The baseline performance characteristics of the proof-of-concept will enable further technical and user-centered research on robust methods for scaling the approach to the many thousands of product labels currently on the market.
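
    The core of the approach, linking claims to label sections, can be sketched as a simple index from (drug, section) pairs to claims. The snippet below is schematic, not the authors' implementation: the section names follow the abstract, while the claim records are invented.

```python
# A schematic sketch of the core mapping: claims from external drug
# information sources are attached to the label section they are relevant to.
from collections import defaultdict

LABEL_SECTIONS = ("Clinical Studies", "Drug Interactions", "Clinical Pharmacology")

claims = [
    {"source": "ClinicalTrials.gov", "drug": "sertraline",
     "section": "Clinical Studies", "text": "efficacy result ..."},
    {"source": "Drug Interaction Knowledge Base", "drug": "sertraline",
     "section": "Drug Interactions", "text": "interacts with ..."},
]

label_index = defaultdict(list)  # (drug, section) -> linked claims
for claim in claims:
    assert claim["section"] in LABEL_SECTIONS
    label_index[(claim["drug"], claim["section"])].append(claim)

for (drug, section), linked in label_index.items():
    print(drug, "/", section, "->", len(linked), "claim(s)")
```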

    From Text to Knowledge

    The global information space provided by the World Wide Web has dramatically changed the way knowledge is shared all over the world. To make this unbelievably huge information space accessible, search engines index the uploaded contents and provide efficient algorithmic machinery for ranking the importance of documents with respect to an input query. All major search engines, such as Google, Yahoo or Bing, are keyword-based, which is indisputably a very powerful tool for addressing information needs centered around documents. However, this unstructured, document-oriented paradigm of the World Wide Web has serious drawbacks when searching for specific knowledge about real-world entities. When asked for advanced facts about entities, today's search engines are not very good at providing accurate answers. Hand-built knowledge bases such as Wikipedia or its structured counterpart DBpedia are excellent sources that provide common facts. However, these knowledge bases are far from complete, and most knowledge still lies buried in unstructured documents. Statistical machine learning methods have great potential to help bridge the gap between text and knowledge by (semi-)automatically transforming the unstructured representation of today's World Wide Web into a more structured one. This thesis is devoted to reducing this gap with Probabilistic Graphical Models. Probabilistic Graphical Models play a crucial role in modern pattern recognition as they merge two important fields of applied mathematics: Graph Theory and Probability Theory. The first part of the thesis presents a novel system called Text2SemRel that is able to (semi-)automatically construct knowledge bases from textual document collections. The resulting knowledge base consists of facts centered around entities and their relations. An essential part of the system is a novel algorithm for extracting relations between entity mentions that is based on Conditional Random Fields, which are undirected Probabilistic Graphical Models. In the second part of the thesis, we use the power of directed Probabilistic Graphical Models to solve important knowledge discovery tasks in semantically annotated large document collections. In particular, we present extensions of the Latent Dirichlet Allocation framework that are able to learn, in an unsupervised way, the statistical semantic dependencies between unstructured representations such as documents and their semantic annotations. Semantic annotations of documents might refer to concepts originating from a thesaurus or ontology, but also to user-generated informal tags in social tagging systems. These forms of annotation represent a first step towards the conversion to a more structured form of the World Wide Web. In the last part of the thesis, we demonstrate the large-scale applicability of the proposed fact extraction system Text2SemRel. In particular, we extract semantic relations between genes and diseases from a large biomedical textual repository. The resulting knowledge base contains far more potential disease genes than are currently stored in curated databases. The proposed system is thus able to unlock knowledge currently buried in the literature. The literature-derived human gene-disease network is the subject of further analysis against existing curated state-of-the-art databases.
We analyze the derived knowledge base quantitatively by comparing it with several curated databases with regard to, among other things, the size of the databases and the properties of known disease genes. Our experimental analysis shows that the facts extracted from the literature are of high quality.
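
    As a deliberately simplified stand-in for the CRF-based relation extraction in Text2SemRel, the following sketch flags gene-disease co-occurrences within a sentence when a trigger phrase is present. The lexicons, trigger patterns, and sentences are illustrative, and the real system learns such decisions statistically rather than by pattern matching.

```python
# A toy gene-disease relation spotter: co-occurrence within a sentence plus a
# trigger phrase. Text2SemRel instead learns these decisions with Conditional
# Random Fields; this only illustrates the input/output shape of the task.
import re

GENES = {"APOE", "BRCA1"}
DISEASES = {"Alzheimer disease", "breast cancer"}
TRIGGERS = re.compile(r"\b(associated with|linked to|mutations? in)\b", re.I)

sentences = [
    "Variants of APOE are strongly associated with Alzheimer disease.",
    "BRCA1 was sequenced in all samples.",
]

for sent in sentences:
    genes = [g for g in GENES if g in sent]
    diseases = [d for d in DISEASES if d in sent]
    if genes and diseases and TRIGGERS.search(sent):
        for g in genes:
            for d in diseases:
                print(f"candidate relation: ({g}, associated_with, {d})")
```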

    Knowledge Management approaches to model pathophysiological mechanisms and discover drug targets in Multiple Sclerosis

    Multiple Sclerosis (MS) is one of the most prevalent neurodegenerative diseases, and a cure is not yet available. MS is a complex disease for numerous reasons: its etiology is unknown, the diagnosis is not exclusive, the disease course is unpredictable, and therapeutic response varies from patient to patient. There are four established subtypes of MS, which are distinguished based on different characteristics. Many environmental and genetic factors are considered to play a role in MS etiology, including viral infection, vitamin D deficiency, epigenetic changes and certain genes. Despite the large body of diverse scientific knowledge, from laboratory findings to clinical trials, no integrated model is available that portrays the underlying mechanisms of the MS disease state. Contemporary therapies only reduce the severity of the disease, and there is an unmet need for efficient drugs. The present thesis provides a knowledge-based rationale to model MS disease mechanisms and identify potential drug candidates using systems biology approaches. Systems biology is an emerging field that utilizes computational methods to integrate datasets of various granularities and simulate disease outcomes. It provides a framework to model molecular dynamics with precise interaction and contextual details. The proposed approaches were used to extract knowledge from the literature with state-of-the-art text mining technologies, integrate it with proprietary data using semantic platforms, and build different models (a molecular interaction map, agent-based models to simulate disease outcome, and an MS disease progression model over time). For better information representation, a disease ontology was also developed and a methodology for its automatic enrichment was derived. The models provide insight into the disease, and several pathways were explored by combining therapeutics and disease-specific prescriptions. The approaches and models developed in this work resulted in the identification of novel drug candidates that are backed up by existing experimental and clinical knowledge.
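
    A molecular interaction map of the kind described above can be thought of as a labeled directed graph over which candidate pathways are enumerated. The toy sketch below uses networkx to list paths from a hypothetical drug target to a disease phenotype; the nodes, edge labels, and drug-target pairing are invented, not taken from the thesis.

```python
# A toy molecular-interaction map: edges carry an "effect" label, and paths
# from a hypothetical drug to a phenotype mimic the pathway exploration step.
import networkx as nx

G = nx.DiGraph()
G.add_edge("DrugX", "IL17A", effect="inhibits")  # hypothetical candidate
G.add_edge("IL17A", "Th17 activation", effect="promotes")
G.add_edge("Th17 activation", "demyelination", effect="promotes")
G.add_edge("VitaminD", "Th17 activation", effect="inhibits")

for path in nx.all_simple_paths(G, "DrugX", "demyelination"):
    labels = [G.edges[u, v]["effect"] for u, v in zip(path, path[1:])]
    print(" -> ".join(path), "|", ", ".join(labels))
```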

    Federated Query Processing over Heterogeneous Data Sources in a Semantic Data Lake

    Data provides the basis for emerging scientific and interdisciplinary data-centric applications with the potential of improving the quality of life for citizens. Big Data plays an important role in promoting both manufacturing and scientific development through industrial digitization and emerging interdisciplinary research. Open data initiatives have encouraged the publication of Big Data by exploiting the decentralized nature of the Web, allowing for the availability of heterogeneous data generated and maintained by autonomous data providers. Consequently, the growing volume of data consumed by different applications raises the need for effective data integration approaches able to process large volumes of data represented in different formats, schemas and models, which may also include sensitive data, e.g., financial transactions, medical procedures, or personal data. Data Lakes are composed of heterogeneous data sources in their original formats, which reduces the overhead of materialized data integration. Query processing over Data Lakes requires a semantic description of the data collected from heterogeneous data sources. A Data Lake with such semantic annotations is referred to as a Semantic Data Lake. Transforming Big Data into actionable knowledge demands novel and scalable techniques enabling not only Big Data ingestion and curation into the Semantic Data Lake, but also efficient large-scale semantic data integration, exploration, and discovery. Federated query processing techniques utilize source descriptions to find relevant data sources and to find efficient execution plans that minimize the total execution time and maximize the completeness of answers. Existing federated query processing engines employ a coarse-grained description model in which the semantics encoded in the data sources are ignored. Such descriptions may lead to the erroneous selection of data sources for a query and to unnecessary retrieval of data, thus affecting the performance of the query processing engine. In this thesis, we address the problem of federated query processing against heterogeneous data sources in a Semantic Data Lake. First, we tackle the challenge of knowledge representation and propose a novel source description model, RDF Molecule Templates (RDF-MTs), which describe the knowledge available in a Semantic Data Lake in terms of an abstract description of entities belonging to the same semantic concept. Then, we propose a technique for data source selection and query decomposition, the MULDER approach, and query planning and optimization techniques, Ontario, which exploit the characteristics of heterogeneous data sources described using RDF-MTs and provide uniform access to heterogeneous data sources. We then address the challenge of enforcing the privacy and access control requirements imposed by data providers. We introduce a privacy-aware federated query technique, BOUNCER, able to enforce privacy and access control regulations during query processing over data sources in a Semantic Data Lake. In particular, BOUNCER exploits RDF-MT-based source descriptions to express privacy and access control policies, as well as to enforce them automatically during source selection, query decomposition, and planning.
Furthermore, BOUNCER implements query decomposition and optimization techniques able to identify query plans over data sources that not only contain the entities relevant to answer a query, but are also regulated by policies that allow access to these relevant entities. Finally, we tackle the problem of interest-based update propagation and co-evolution of data sources. We present a novel approach for interest-based RDF update propagation that consistently maintains full or partial replications of large datasets and deals with co-evolution.
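
    A simplified sketch of RDF-MT-style source selection: if each source is described by the classes and predicates it exposes, a star-shaped subquery can be routed to the sources whose description covers all of its predicates. The descriptions and predicate names below are invented for illustration and do not reproduce MULDER or Ontario internals.

```python
# Each source is described by an RDF-MT-like summary (class + predicates);
# a star-shaped subquery is routed to the sources covering its predicates.
SOURCE_DESCRIPTIONS = {
    "drugbank-endpoint": {"class": "Drug",
                          "predicates": {"name", "atcCode", "interactsWith"}},
    "clinical-csv-wrapper": {"class": "Drug",
                             "predicates": {"name", "trialOutcome"}},
}

def select_sources(star_predicates: set[str]) -> list[str]:
    """Return sources whose description covers all predicates of the star."""
    return [src for src, mt in SOURCE_DESCRIPTIONS.items()
            if star_predicates <= mt["predicates"]]

print(select_sources({"name", "interactsWith"}))  # -> ['drugbank-endpoint']
print(select_sources({"name", "trialOutcome"}))   # -> ['clinical-csv-wrapper']
```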