10 research outputs found

    Knowledge Patterns for the Web: extraction, transformation and reuse

    This thesis investigates methods and software architectures for discovering the typical, frequently occurring structures used to organize knowledge on the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics on the Web (i.e., the knowledge soup problem) and the difficulty of drawing a relevant boundary around data so as to capture the knowledge that is meaningful with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL 2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first is based on a purely syntactic transformation of the original source to RDF, followed by a refactoring step that adds semantics by selecting meaningful RDF triples. The second draws boundaries around RDF data in Linked Data by analyzing type paths: a type path is a possible route through an RDF graph that takes into account the types associated with the nodes along the path. We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two architectural styles, i.e., Component-based and REST. Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool that exploits KPs for entity summarization.
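
    As a rough, hypothetical illustration of the type-path idea only (not the thesis's K~ore implementation), the Python sketch below walks an rdflib graph for a fixed number of hops and records the rdf:type of every node it visits; the function name type_paths and the example data are ours.

    from rdflib import Graph, URIRef
    from rdflib.namespace import RDF

    def type_paths(g: Graph, start: URIRef, length: int = 2):
        # Yield tuples (class, property, class, ...) for paths of `length` hops
        # starting at `start`, recording the rdf:type of every node visited.
        def types_of(node):
            # Nodes without an explicit type are represented by None.
            return set(g.objects(node, RDF.type)) or {None}

        def walk(node, hops, acc):
            if hops == 0:
                for t in types_of(node):
                    yield acc + (t,)
                return
            for t in types_of(node):
                for p, o in g.predicate_objects(node):
                    # Skip typing triples and non-resource objects (literals, bnodes).
                    if p == RDF.type or not isinstance(o, URIRef):
                        continue
                    yield from walk(o, hops - 1, acc + (t, p))

        yield from walk(start, length, ())

    # Usage with a hypothetical Turtle file and resource URI:
    # g = Graph().parse("data.ttl")
    # for path in type_paths(g, URIRef("http://example.org/resource/Rome")):
    #     print(path)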

    From Text to Knowledge

    The global information space provided by the World Wide Web has dramatically changed the way knowledge is shared all over the world. To make this enormous information space accessible, search engines index the uploaded contents and provide efficient algorithmic machinery for ranking the importance of documents with respect to an input query. All major search engines such as Google, Yahoo or Bing are keyword-based, which is indisputably a very powerful tool for satisfying information needs centered around documents. However, this unstructured, document-oriented paradigm of the World Wide Web has serious drawbacks when searching for specific knowledge about real-world entities. When asked for advanced facts about entities, today's search engines are not very good at providing accurate answers. Hand-built knowledge bases such as Wikipedia or its structured counterpart DBpedia are excellent sources that provide common facts. However, these knowledge bases are far from complete, and most of the knowledge still lies buried in unstructured documents. Statistical machine learning methods have great potential to help bridge the gap between text and knowledge by (semi-)automatically transforming the unstructured representation of today's World Wide Web into a more structured representation. This thesis is devoted to reducing this gap with Probabilistic Graphical Models. Probabilistic Graphical Models play a crucial role in modern pattern recognition as they merge two important fields of applied mathematics: Graph Theory and Probability Theory. The first part of the thesis presents a novel system called Text2SemRel that is able to (semi-)automatically construct knowledge bases from textual document collections. The resulting knowledge base consists of facts centered around entities and their relations. An essential part of the system is a novel algorithm for extracting relations between entity mentions that is based on Conditional Random Fields, which are undirected Probabilistic Graphical Models. In the second part of the thesis, we use the power of directed Probabilistic Graphical Models to solve important knowledge discovery tasks in semantically annotated large document collections. In particular, we present extensions of the Latent Dirichlet Allocation framework that are able to learn, in an unsupervised way, the statistical semantic dependencies between unstructured representations such as documents and their semantic annotations. Semantic annotations of documents might refer to concepts originating from a thesaurus or ontology, but also to user-generated informal tags in social tagging systems. These forms of annotation represent a first step towards the conversion of the World Wide Web to a more structured form. In the last part of the thesis, we demonstrate the large-scale applicability of the proposed fact extraction system Text2SemRel. In particular, we extract semantic relations between genes and diseases from a large biomedical textual repository. The resulting knowledge base contains far more potential disease genes than are currently stored in curated databases. Thus, the proposed system is able to unlock knowledge currently buried in the literature. The literature-derived human gene-disease network is the subject of further analysis with respect to existing curated state-of-the-art databases. We analyze the derived knowledge base quantitatively by comparing it with several curated databases with regard to, among other things, the size of the databases and the properties of known disease genes. Our experimental analysis shows that the facts extracted from the literature are of high quality.
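
    The following toy sketch shows what CRF-based sequence labelling for gene-disease relation extraction can look like, using sklearn-crfsuite as a stand-in; it is not the Text2SemRel system, and the features, labels and training data are illustrative assumptions.

    import sklearn_crfsuite

    def token_features(sent, i):
        # Very small, illustrative feature set for the token at position i.
        word = sent[i][0]
        return {
            "lower": word.lower(),
            "is_title": word.istitle(),
            "prev": sent[i - 1][0].lower() if i > 0 else "<BOS>",
            "next": sent[i + 1][0].lower() if i < len(sent) - 1 else "<EOS>",
        }

    # Each training sentence is a list of (token, label) pairs; BIO labels mark
    # the arguments of a candidate gene-disease relation.
    train = [[("BRCA1", "B-GENE"), ("is", "O"), ("linked", "O"),
              ("to", "O"), ("breast", "B-DISEASE"), ("cancer", "I-DISEASE")]]

    X = [[token_features(s, i) for i in range(len(s))] for s in train]
    y = [[label for _, label in s] for s in train]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))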

    Extracting and Cleaning RDF Data

    The RDF data model has become a prevalent format for representing heterogeneous data because of its versatility. The capability of dismantling information from its native formats and representing it in triple format offers a simple yet powerful way of modelling data obtained from multiple sources. In addition, the triple format and schema constraints of the RDF model make RDF data easy to process as labeled, directed graphs. This graph representation of RDF data supports higher-level analytics by enabling querying using different techniques and query languages, e.g., SPARQL. Analytics that require structured data are supported by transforming the graph data on the fly to populate the target schema needed for downstream analysis. These target schemas are defined by downstream applications according to their information needs. The flexibility of RDF data brings two main challenges. First, the extraction of RDF data is a complex task that may involve domain expertise about the information to be extracted for different applications. Second, a significant aspect of analyzing RDF data is its quality, which depends on multiple factors including the reliability of the data sources and the accuracy of the extraction systems. The quality of any analysis depends mainly on the quality of the underlying data; therefore, evaluating and improving the quality of RDF data has a direct effect on the correctness of downstream analytics. This work presents multiple approaches related to the extraction and quality evaluation of RDF data. To cope with the large amounts of data that need to be extracted, we present DSTLR, a scalable framework to extract RDF triples from semi-structured and unstructured data sources. For rare entities that fall on the long tail of information, there may not be enough signals to support high-confidence extraction; towards this problem, we present an approach to estimate property values for long-tail entities. We also present multiple algorithms and approaches that focus on the quality of RDF data, including discovering quality constraints from RDF data and utilizing machine learning techniques to repair errors in RDF data.
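
    As a small, hypothetical example of one kind of RDF quality constraint (not the constraint-discovery or repair algorithms of this work), the sketch below flags subjects that carry more than one value for a property assumed to be functional.

    from collections import defaultdict
    from rdflib import Graph, URIRef

    def functional_violations(g: Graph, prop: URIRef):
        # Collect all values of `prop` per subject and report subjects with
        # more than one distinct value, i.e. violations if `prop` is functional.
        values = defaultdict(set)
        for s, o in g.subject_objects(prop):
            values[s].add(o)
        return {s: vals for s, vals in values.items() if len(vals) > 1}

    # Usage with a hypothetical extracted graph and property:
    # g = Graph().parse("extracted.ttl")
    # bad = functional_violations(g, URIRef("http://example.org/ontology/birthDate"))
    # for subj, vals in bad.items():
    #     print(subj, "has conflicting values:", vals)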

    An environment for the analysis and quality assessment of big and linked data

    Linking and publishing data in the Linked Open Data format increases the interoperability and discoverability of resources over the Web. To accomplish this, the process comprises several design decisions based on the Linked Data principles, which on one hand recommend using standards for representing and accessing data on the Web, and on the other hand recommend setting hyperlinks between data from different sources. Despite the efforts of the World Wide Web Consortium (W3C), the main international standards organization for the World Wide Web, there is no single tailored formula for publishing data as Linked Data. In addition, the quality of the published Linked Open Data (LOD) is a fundamental issue that has yet to be thoroughly managed and considered. The main objective of this doctoral thesis is to design and implement a novel framework for selecting, analyzing, converting, interlinking, and publishing data from diverse sources, paying close attention to quality assessment throughout all steps and modules of the framework. The goal is to examine whether and to what extent Semantic Web technologies are applicable for merging data from different sources and enabling end-users to obtain additional information that was not available in the individual datasets, in addition to integration into the Semantic Web community space. The thesis also intends to validate the applicability of the process in a specific and demanding use case, i.e., creating and publishing an Arabic Linked Drug Dataset based on open drug datasets from selected Arabic countries, and to discuss the quality issues observed throughout the linked data life-cycle. To that end, a Semantic Data Lake was established in the pharmaceutical domain that allows further integration and the development of different business services on top of the integrated data sources. Through data representation in an open, machine-readable format, the approach offers an optimal solution for information and data dissemination, for building domain-specific applications, and for enriching and gaining value from the original dataset. This thesis showcases how the pharmaceutical domain benefits from evolving research trends for building competitive advantages. However, as elaborated in this thesis, a better understanding of the specifics of the Arabic language is required to extend the use of linked data technologies in the targeted Arabic organizations.
    Linking and publishing data in the Linked Open Data format increases the interoperability and discoverability of resources over the Web. The process is based on the Linked Data principles (W3C, 2006), which on one hand elaborate standards for representing and accessing data on the Web (RDF, OWL, SPARQL) and on the other hand suggest the use of hyperlinks between data from different sources. Despite the efforts of the W3C consortium (the main international standards organization for the Web), there is no single formula for implementing the process of publishing data in the Linked Data format. Considering that the quality of published Linked Open Data is decisive for the future development of the Web, the main goals of this doctoral dissertation are (1) the design and implementation of an innovative framework for selecting, analyzing, converting, interlinking and publishing data from different sources, and (2) an analysis of the application of this approach in the pharmaceutical domain. The dissertation investigates in detail the question of the quality of big and linked data ecosystems (Linked Data Ecosystems), taking into account the possibility of reusing open data. The work is motivated by the need to enable researchers from Arab countries to use Semantic Web technologies to link their data with open data such as DBpedia. The goal is to examine whether open data from Arab countries enable end-users to obtain additional information that is not available in the individual datasets, in addition to integration into the Semantic Web space. The dissertation proposes a methodology for developing Linked Data applications and implements a software solution that enables querying of a consolidated dataset of drugs from selected Arab countries. The consolidated dataset is implemented in the form of a Semantic Data Lake. This thesis shows how the pharmaceutical industry benefits from the application of innovative technologies and research trends from the field of semantic technologies. However, as elaborated in this thesis, a better understanding of the specifics of the Arabic language is required to implement Linked Data tools and apply them to data from Arab countries.
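
    A minimal, assumed sketch of the conversion and interlinking step described above: one tabular drug record is turned into RDF triples with rdflib and linked to DBpedia via owl:sameAs. The example namespace, class and property names are illustrative, not those of the Arabic Linked Drug Dataset.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/drug/")  # hypothetical namespace

    # One row from a hypothetical tabular drug source.
    record = {"id": "D001", "name": "Paracetamol", "country": "Jordan"}

    g = Graph()
    drug = EX[record["id"]]
    g.add((drug, RDF.type, EX.Drug))                       # illustrative class
    g.add((drug, RDFS.label, Literal(record["name"])))
    g.add((drug, EX.marketedIn, Literal(record["country"])))  # illustrative property
    # Interlink with the corresponding DBpedia resource.
    g.add((drug, OWL.sameAs, Namespace("http://dbpedia.org/resource/")["Paracetamol"]))

    print(g.serialize(format="turtle"))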

    A SPARQL query translator for incompletely aligned data sources that provides an estimate of the translation quality

    Today the Web contains an ever-growing number of linked datasets of different provenance, covering different domains and openly accessible to the general public for free exploitation. This doctoral thesis focuses on query processing over this cloud of linked datasets, addressing the difficulties in accessing them that stem from their heterogeneity. The main contribution is a new proposal that makes it possible to translate a query posed over one linked dataset into a query over another, without the two datasets being completely aligned and without the user having to know the technical characteristics inherent to each data source. This proposal is materialized in a translator that transforms a SPARQL query, suitably expressed in terms of the vocabularies used in a source dataset, into another SPARQL query suitably expressed for a target dataset that involves different vocabularies. The translation is based on existing alignments between terms in the different datasets. When the translator cannot produce a semantically equivalent query because of a scarcity of term alignments, the system produces a semantic approximation of the query to avoid returning an empty answer to the user. Translation across the different datasets is achieved by applying a varied set of transformation rules. Five types of rules are defined in this thesis, depending on the motivation of the transformation: equivalence rules, hierarchy rules, rules based on the answers to the query, rules based on the profile of the resources appearing in the query, and rules based on the features associated with the resources appearing in the query. Moreover, since the translator cannot guarantee semantic preservation given the heterogeneity of the vocabularies, obtaining an estimate of the quality of the produced translation becomes crucial. Another relevant contribution of the thesis is therefore the definition of how to inform the user about the quality of the translated query, through two indicators: a similarity factor based on the translation process itself, and a quality indicator for the results, estimated by means of a predictive model. Finally, the thesis demonstrates feasibility by establishing an evaluation framework on which a prototype of the system has been validated.
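
    The sketch below illustrates only the simplest rule family (equivalence) in a naive way: source-vocabulary terms in a SPARQL query are replaced with aligned target-vocabulary terms. It is not the thesis's translator and performs no quality estimation; the alignment table is an illustrative assumption.

    # Alignment table: source term -> assumed equivalent target term.
    ALIGNMENTS = {
        "http://xmlns.com/foaf/0.1/name": "http://www.w3.org/2000/01/rdf-schema#label",
        "http://dbpedia.org/ontology/Film": "http://schema.org/Movie",
    }

    def translate(query, alignments):
        # Rewrite every aligned source term appearing as <URI> in the query.
        for source_term, target_term in alignments.items():
            query = query.replace(f"<{source_term}>", f"<{target_term}>")
        return query

    source_query = """
    SELECT ?film ?name WHERE {
      ?film a <http://dbpedia.org/ontology/Film> ;
            <http://xmlns.com/foaf/0.1/name> ?name .
    }
    """
    print(translate(source_query, ALIGNMENTS))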

    Interest-based segmentation of online video platforms' viewers using semantic technologies

    To better connect supply and demand for various products, marketers needed novel ways to segment and target their customers with relevant adverts. Over the last decade, companies that collected large amounts of psychographic and behavioural data about their customers emerged as the pioneers of hyper-targeting. For example, Google can infer people's interests based on their search queries, Facebook based on their thoughts, and Amazon by analysing their shopping cart history. In this context, the traditional channel used for advertising, the media market, saw its revenues plummet as it failed to infer viewers' interests based on the programmes they are watching and target them with bespoke adverts. In order to propose a methodology for inferring viewers' interests, this study adopted an interdisciplinary approach by analysing the problem from the viewpoint of three disciplines: Customer Segmentation, Media Market, and Large Knowledge Bases. Critically assessing and integrating the disciplinary insights was required for a deep understanding of: the reasons why psychographic variables like interests and values are a better predictor of consumer behaviour than demographic variables; the various types of data collection and analysis methods used in the media industry; and the state of the art in detecting concepts in text and linking them to various ontologies for inferring interests. Building on these insights, a methodology was proposed that can fully automate the process of inferring viewers' interests by semantically analysing the descriptions of the programmes they watch and correlating them with data about their viewing history. While the methodology was deemed valid from a theoretical point of view, an extensive empirical validation was also undertaken for a better understanding of its applicability. Programme metadata for 320 programmes from a large broadcaster was analysed together with the viewing history of over 50,000 people during a three-year period. The findings from the validation were eventually used to further refine the methodology and show that it is possible not only to infer individual viewers' interests based on the programmes watched, but also to cluster the audience based on their content consumption habits and track the performance of various topics in terms of attracting new viewers. Having an effective way to infer viewers' interests has various applications for the media market, most notably in the areas of better segmenting and targeting audiences, developing content that matches viewers' interests, or improving existing recommendation engines.
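
    A minimal sketch of the profiling step under assumed inputs: topics already detected in programme metadata are aggregated over each viewer's history into a weighted interest profile. The data and weighting scheme are illustrative, not the methodology validated in the thesis.

    from collections import Counter

    # Hypothetical output of concept detection over programme metadata.
    programme_topics = {
        "prog-001": ["Politics", "Economy"],
        "prog-002": ["Football", "Sport"],
        "prog-003": ["Economy", "Technology"],
    }

    # Hypothetical viewing history: viewer -> programmes watched.
    viewing_history = {
        "viewer-A": ["prog-001", "prog-003", "prog-003"],
        "viewer-B": ["prog-002"],
    }

    def interest_profile(watched, topics):
        # Count topic occurrences over watched programmes and normalize.
        counts = Counter(t for prog in watched for t in topics.get(prog, []))
        total = sum(counts.values()) or 1
        return {topic: n / total for topic, n in counts.items()}

    for viewer, watched in viewing_history.items():
        print(viewer, interest_profile(watched, programme_topics))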

    Strategies for Managing Linked Enterprise Data

    Data, information and knowledge have become key assets of our 21st-century economy. As a result, data and knowledge management have become key tasks with regard to sustainable development and business success. Often, knowledge is not explicitly represented: it resides in the minds of people or is scattered among a variety of data sources. Knowledge is inherently associated with semantics that convey its meaning to a human or machine agent. The Linked Data concept facilitates the semantic integration of heterogeneous data sources. However, we still lack an effective knowledge integration strategy applicable to enterprise scenarios, one that balances the large amounts of data stored in legacy information systems and data lakes with tailored domain-specific ontologies that formally describe real-world concepts. In this thesis we investigate strategies for managing linked enterprise data, analyzing how actionable knowledge can be derived from enterprise data by leveraging knowledge graphs. Actionable knowledge provides valuable insights, supports decision makers with clear, interpretable arguments, and keeps its inference processes explainable. The benefits of employing actionable knowledge and a coherent strategy for managing it span from a holistic semantic representation layer of enterprise data, i.e., representing numerous data sources as one consistent and integrated knowledge source, to unified interaction mechanisms with other systems that are able to effectively and efficiently leverage such actionable knowledge. Several challenges have to be addressed on different conceptual levels in pursuit of this goal, i.e., means for representing knowledge, semantic data integration of raw data sources and subsequent knowledge extraction, communication interfaces, and implementation. In order to tackle these challenges we present the concept of Enterprise Knowledge Graphs (EKGs) and describe their characteristics and advantages compared to existing approaches. We study each challenge with regard to the use of EKGs and demonstrate their efficiency. In particular, EKGs are able to reduce the semantic data integration effort when processing large-scale heterogeneous datasets. Having built a consistent logical integration layer that hides the heterogeneity behind the scenes, EKGs then unify query processing and enable effective communication interfaces for other enterprise systems. The achieved results allow us to conclude that strategies for managing linked enterprise data based on EKGs exhibit reasonable performance, comply with enterprise requirements, and ensure integrated data and knowledge management throughout the data life cycle.
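
    As a hypothetical illustration of the unified query access an EKG can expose, the sketch below queries a single (assumed) SPARQL endpoint with SPARQLWrapper instead of each legacy system separately; the endpoint URL and vocabulary are made up.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Hypothetical enterprise SPARQL endpoint exposed by the integration layer.
    endpoint = SPARQLWrapper("http://ekg.example.org/sparql")
    endpoint.setQuery("""
        SELECT ?product ?label WHERE {
          ?product a <http://example.org/ontology/Product> ;
                   <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for row in results["results"]["bindings"]:
        print(row["product"]["value"], row["label"]["value"])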
