
    Knowledge Extraction from Textual Resources through Semantic Web Tools and Advanced Machine Learning Algorithms for Applications in Various Domains

    Nowadays there is a tremendous amount of unstructured data, often in the form of text, created and stored in a variety of forms across many domains, such as patients' health records, social network comments, and scientific publications. This volume of data is an invaluable source of knowledge, but it is difficult for machines to mine. At the same time, novel tools and advanced methodologies have been introduced in several domains, improving the efficacy and efficiency of data-based services. Following this trend, this thesis shows how to parse data from text with Semantic Web based tools, feed the data into Machine Learning methodologies, and produce services or resources that facilitate the execution of certain tasks. More precisely, the use of Semantic Web technologies powered by Machine Learning algorithms is investigated in the Healthcare and E-Learning domains through previously untested methodologies. Furthermore, this thesis investigates the use of state-of-the-art tools to move data from texts to graphs that represent the knowledge contained in scientific literature. Finally, the use of a Semantic Web ontology and novel heuristics to detect insights from biological data in the form of graphs is presented. The thesis contributes to the scientific literature in terms of both results and resources. Most of the material presented in this thesis derives from research papers published in international journals or conference proceedings.
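
    The abstract above stays at the architectural level; as a purely illustrative sketch of the text-to-graph step it describes, the snippet below extracts toy (subject, relation, object) statements and loads them into an RDF graph. rdflib is assumed as the Semantic Web toolkit, and the namespace, extraction rule, and entity names are hypothetical, not the thesis's actual pipeline.

```python
# Illustrative sketch: turn (subject, relation, object) statements extracted
# from text into an RDF graph, assuming rdflib as the Semantic Web toolkit.
# The namespace, entities, and the toy "extractor" are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/kb/")

def extract_statements(text):
    """Toy extractor: in practice this would be an NLP/ML pipeline."""
    statements = []
    for sentence in text.split("."):
        words = sentence.split()
        if "treats" in words:
            i = words.index("treats")
            statements.append((words[i - 1], "treats", words[i + 1]))
    return statements

def build_graph(text):
    g = Graph()
    for subj, rel, obj in extract_statements(text):
        s, p, o = EX[subj], EX[rel], EX[obj]
        g.add((s, RDF.type, EX.Entity))
        g.add((o, RDF.type, EX.Entity))
        g.add((s, p, o))
        g.add((s, RDFS.label, Literal(subj)))
        g.add((o, RDFS.label, Literal(obj)))
    return g

if __name__ == "__main__":
    graph = build_graph("Aspirin treats headache. Metformin treats diabetes.")
    print(graph.serialize(format="turtle"))
```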

    Text Mining and Gene Expression Analysis Towards Combined Interpretation of High Throughput Data

    Microarrays can capture gene expression activity for thousands of genes simultaneously and thus make it possible to analyze cell physiology and disease processes at the molecular level. The interpretation of microarray gene expression experiments profits from knowledge of the analyzed genes and proteins and the biochemical networks in which they play a role. The trend is towards the development of data analysis methods that integrate diverse data types. Currently, the most comprehensive biomedical knowledge source is a large repository of free-text articles. Text mining makes it possible to automatically extract and use information from texts. This thesis addresses two key aspects, biomedical text mining and gene expression data analysis, with the focus on providing high-quality methods and data that contribute to the development of integrated analysis approaches. The work is structured in three parts. Each part begins by providing the relevant background, and each chapter describes the developed methods as well as applications and results. Part I deals with biomedical text mining: Chapter 2 summarizes the relevant background of text mining; it describes text mining fundamentals, important text mining tasks, applications and particularities of text mining in the biomedical domain, and evaluation issues. In Chapter 3, a method for generating high-quality gene and protein name dictionaries is described. The analysis of the generated dictionaries revealed important properties of individual nomenclatures and the databases used (Fundel and Zimmer, 2006). The dictionaries are publicly available via a Wiki, a web service, and several client applications (Szugat et al., 2005). In Chapter 4, methods for the dictionary-based recognition of gene and protein names in texts and their mapping onto unique database identifiers are described. These methods make it possible to extract information from texts and to integrate text-derived information with data from other sources. Three named entity identification systems have been set up, two of them building upon the previously existing tool ProMiner (Hanisch et al., 2003). All of them have shown very good performance in the BioCreAtIvE challenges (Fundel et al., 2005a; Hanisch et al., 2005; Fundel and Zimmer, 2007). In Chapter 5, a new method for relation extraction (Fundel et al., 2007) is presented. It was applied to the largest collection of biomedical literature abstracts, and thus a comprehensive network of human gene and protein relations has been generated. A classification approach (Küffner et al., 2006) can be used to further specify relation types, e.g., as activating, direct physical, or gene-regulatory relations. Part II deals with gene expression data analysis: gene expression data needs to be processed so that differentially expressed genes can be identified. Gene expression data processing consists of several sequential steps. Two important steps are normalization, which aims at removing systematic variances between measurements, and quantification of differential expression by p-value and fold change determination. Numerous methods exist for these tasks. Chapter 6 describes the relevant background of gene expression data analysis; it presents the biological and technical principles of microarrays and gives an overview of the most relevant data processing steps. Finally, it provides a short introduction to osteoarthritis, which is the focus of the analyzed gene expression data sets.
In Chapter 7, quality criteria for the selection of normalization methods are described, and a method for the identification of differentially expressed genes is proposed, which is appropriate for data with large intensity variances between spots representing the same gene (Fundel et al., 2005b). Furthermore, a system is described that selects an appropriate combination of feature selection method and classifier, and thus identifies genes which lead to good classification results and show consistent behavior in different sample subgroups (Davis et al., 2006). The analysis of several gene expression data sets dealing with osteoarthritis is described in Chapter 8. This chapter contains the biomedical analysis of relevant disease processes and distinct disease stages (Aigner et al., 2006a), and a comparison of various microarray platforms and osteoarthritis models. Part III deals with integrated approaches and thus provides the connection between Parts I and II: Chapter 9 gives an overview of different types of integrated data analysis approaches, with a focus on approaches that integrate gene expression data with manually compiled data, large-scale networks, or text mining. In Chapter 10, a method for the identification of genes which are consistently regulated and have a coherent literature background (Küffner et al., 2005) is described. This method indicates how gene and protein name identification and gene expression data can be integrated to return clusters that contain genes relevant for the respective experiment, together with literature information that supports interpretation. Finally, Chapter 11 presents ideas on how the described methods can contribute to current research, along with possible future directions.
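
    The quantification step mentioned in Part II (p-values and fold changes for differentially expressed genes) can be pictured with a generic sketch. The code below is not the method of Fundel et al. (2005b); it is a minimal per-gene log2 fold change plus Welch t-test, with array shapes, simulated data, and significance thresholds assumed purely for illustration.

```python
# Generic illustration of differential-expression quantification:
# per-gene log2 fold change and Welch t-test p-value between two groups.
# Shapes, data, and thresholds are assumed for illustration only.
import numpy as np
from scipy import stats

def differential_expression(expr_a, expr_b):
    """expr_a, expr_b: arrays of shape (n_genes, n_samples) holding
    normalized, strictly positive intensity values."""
    mean_a = expr_a.mean(axis=1)
    mean_b = expr_b.mean(axis=1)
    log2_fold_change = np.log2(mean_b / mean_a)
    # Welch's t-test per gene (unequal variances between the groups).
    _, p_values = stats.ttest_ind(expr_a, expr_b, axis=1, equal_var=False)
    return log2_fold_change, p_values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy = rng.lognormal(mean=5.0, sigma=0.3, size=(1000, 8))
    disease = rng.lognormal(mean=5.0, sigma=0.3, size=(1000, 8))
    disease[:50] *= 2.5  # simulate 50 up-regulated genes
    lfc, pvals = differential_expression(healthy, disease)
    significant = (np.abs(lfc) > 1.0) & (pvals < 0.05)
    print(f"{significant.sum()} genes pass |log2 FC| > 1 and p < 0.05")
```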

    Grounding event references in news

    Events are frequently discussed in natural language, and their accurate identification is central to language understanding. Yet they are diverse and complex in ontology and reference, so their computational processing proves challenging. News provides a shared basis for communication by reporting events. We perform several studies into news event reference. One annotation study characterises each news report in terms of its update and topic events, but finds that topic is better considered through explicit references to background events. In this context, we propose the event linking task which, analogous to named entity linking or disambiguation, models the grounding of references to notable events. It defines the disambiguation of an event reference as a link to the archival article that first reports it. When two references are linked to the same article, they need not be references to the same event. Event linking aims to provide an intuitive approximation to coreference, erring on the side of over-generation in contrast with the literature. The task is also distinguished in considering event references from multiple perspectives over time. We diagnostically evaluate the task by first linking references to past, newsworthy events in news and opinion pieces to an archive of the Sydney Morning Herald. The intensive annotation results in only a small corpus of 229 distinct links. However, we observe that a number of hyperlinks targeting online news correspond to event links. We thus acquire two large corpora of hyperlinks at very low cost. From these we learn weights for temporal and term overlap features in a retrieval system. These noisy data lead to significant performance gains over a bag-of-words baseline. While our initial system can accurately predict many event links, most will require deep linguistic processing for their disambiguation.
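
    As a rough illustration of the retrieval setup described above (learned weights over temporal and term-overlap features), the sketch below scores archived articles for an event reference with a weighted combination of the two features. The feature definitions, weights, data structures, and example articles are assumptions, not the system built in the thesis.

```python
# Illustrative event-linking scorer: rank archived articles for an event
# reference by a weighted mix of term overlap and temporal proximity.
# Feature definitions and weights are assumptions, not the thesis system.
from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    doc_id: str
    published: date
    tokens: set

def term_overlap(ref_tokens, article):
    if not ref_tokens:
        return 0.0
    return len(ref_tokens & article.tokens) / len(ref_tokens)

def temporal_proximity(ref_date, article, scale_days=365.0):
    # Only articles published before the reference; decay with age difference.
    delta = (ref_date - article.published).days
    if delta < 0:
        return 0.0
    return 1.0 / (1.0 + delta / scale_days)

def rank_candidates(ref_tokens, ref_date, archive, w_term=0.7, w_time=0.3):
    scored = [
        (w_term * term_overlap(ref_tokens, a)
         + w_time * temporal_proximity(ref_date, a), a.doc_id)
        for a in archive
    ]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    archive = [
        Article("smh-2001-123", date(2001, 9, 12), {"attack", "towers", "york"}),
        Article("smh-2004-456", date(2004, 12, 27), {"tsunami", "aceh", "ocean"}),
    ]
    ref_tokens = {"attack", "york", "anniversary"}
    print(rank_candidates(ref_tokens, date(2011, 9, 11), archive))
```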

    Query expansion by relying on the structure of knowledge bases

    Query expansion techniques aim at improving the results achieved by a user's query by introducing new terms, called expansion features. Expansion features introduce new concepts that are semantically related to the concepts in the user's query and that allow retrieving documents that otherwise would not be retrieved. Thus, the challenge is to select those expansion features that improve the results the most; a bad choice of expansion features may be counterproductive. In this thesis, we use an external source of information, a Knowledge Base (KB), as the source of expansion features. A knowledge base consists of a set of entries, each of which represents a concept and has, at least, a name, which can be used as an expansion feature. The techniques in this family have become more popular due to the increase of available data, for example, Wikipedia. In particular, we focus on exploiting those KBs whose entries are linked to each other, forming a graph of entries. To the best of our knowledge, most of the techniques in the KB family rely on some kind of text analysis, such as explicit semantic analysis, or are based on other existing query expansion techniques such as pseudo-relevance feedback. However, the underlying network structure of KBs has barely been exploited. In this thesis, we show that this structure can be used to identify reliable expansion features for the query expansion process, and we design a novel expansion technique, Structural Query Expansion (SQE). For SQE to benefit from the particular structure of a KB, we propose a methodology to identify the structural characteristics that, given a query, allow identifying those nodes in the KB that are good candidates to be used as sources of expansion features, called expansion nodes from now on. The methodology consists of building a ground truth that connects each query from a query set with those nodes of the KB that, when used to extract the expansion features, achieve the best results in terms of precision; we call this set of nodes the expansion query graph. Then, we compare the expansion query graphs of the queries to find shared characteristics. SQE materializes the revealed characteristics into a set of structural motifs. In the particular case of Wikipedia, we have found two motifs, called triangular and square. In the former, the query node and the expansion node are doubly linked and the expansion node belongs to, at least, the same categories as the query node. In the latter, the query node and the expansion node are also doubly linked and their categories are connected. These motifs are used, given a query and its query nodes, to identify all the expansion nodes, which are then used as the source of expansion features. Notice that we have designed this technique to be orthogonal to others, because it is fully decoupled from the search process and does not depend on the particular collection of documents. We have tested our technique with three different datasets to avoid any kind of overfitting, and the results are consistent among the three of them. The results, which are validated with statistical significance tests, show that SQE achieves up to a 150% improvement in precision. Finally, we show that our technique runs in sub-second times (358.23 ms at maximum), which makes it feasible for a real query expansion system.
This is especially relevant because, to the best of our knowledge, performance is an aspect that is ignored in most works, and thus it is difficult to know whether they can be included in real systems or not.
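
    To make the triangular and square motifs above concrete, the sketch below checks them against a toy KB graph using networkx. The KB model (a directed article graph with a per-node category set, plus a separate graph linking categories), the reading of 'connected' as a direct category link, and all node names are assumptions, not the actual SQE implementation.

```python
# Illustrative motif check for SQE-style expansion-node selection.
# Assumed KB model: directed article graph with a 'categories' set per node,
# plus a separate graph linking categories. All names are hypothetical.
import networkx as nx

def doubly_linked(kb, u, v):
    return kb.has_edge(u, v) and kb.has_edge(v, u)

def triangular(kb, query_node, candidate):
    # Doubly linked, and the candidate covers all categories of the query node.
    cats_q = kb.nodes[query_node]["categories"]
    cats_c = kb.nodes[candidate]["categories"]
    return doubly_linked(kb, query_node, candidate) and cats_q <= cats_c

def square(kb, cat_graph, query_node, candidate):
    # Doubly linked, and some category of the query node is directly linked
    # to some category of the candidate in the category graph.
    cats_q = kb.nodes[query_node]["categories"]
    cats_c = kb.nodes[candidate]["categories"]
    return doubly_linked(kb, query_node, candidate) and any(
        cat_graph.has_edge(a, b) for a in cats_q for b in cats_c
    )

def expansion_nodes(kb, cat_graph, query_node):
    return {
        n for n in kb.nodes
        if n != query_node
        and (triangular(kb, query_node, n) or square(kb, cat_graph, query_node, n))
    }

if __name__ == "__main__":
    kb = nx.DiGraph()
    kb.add_node("Jaguar", categories={"Felines"})
    kb.add_node("Leopard", categories={"Felines", "Panthera"})
    kb.add_edges_from([("Jaguar", "Leopard"), ("Leopard", "Jaguar")])
    cat_graph = nx.Graph([("Felines", "Panthera")])
    print(expansion_nodes(kb, cat_graph, "Jaguar"))
```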

    Spectators’ aesthetic experiences of sound and movement in dance performance

    In this paper we present a study of spectators’ aesthetic experiences of sound and movement in live dance performance. A multidisciplinary team comprising a choreographer, neuroscientists and qualitative researchers investigated the effects of different sound scores on dance spectators. What would be the impact of auditory stimulation on kinesthetic experience and/or aesthetic appreciation of the dance? What would be the effect of removing music altogether, so that spectators watched dance while hearing only the performers’ breathing and footfalls? We investigated audience experience through qualitative research, using post-performance focus groups, while a separately conducted functional brain imaging (fMRI) study measured the synchrony in brain activity across spectators as they watched dance with music or with breathing only. When audiences watched dance accompanied by music, the fMRI data revealed evidence of greater intersubject synchronisation in a brain region consistent with complex auditory processing. The audience research found that some spectators derived pleasure from finding convergences between two complex stimuli (dance and music). The removal of music and the resulting audibility of the performers’ breathing had a significant impact on spectators’ aesthetic experience. The fMRI analysis showed increased synchronisation among observers, suggesting greater influence of the body when interpreting the dance stimuli. The audience research found evidence of a similarly corporeally focused experience. The paper discusses possible connections between the findings of our different approaches, and considers the implications of this study for interdisciplinary research collaborations between arts and sciences.
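
    The intersubject synchronisation referred to above is commonly computed as the correlation of each spectator's brain-activity time course with the average of the remaining spectators (leave-one-out intersubject correlation). The sketch below shows that generic computation on simulated data; it is not the study's actual fMRI pipeline, and the array shapes and signal model are assumptions.

```python
# Generic intersubject-correlation (ISC) sketch: correlate each subject's
# regional time course with the average of all other subjects.
# Shapes and simulated data are assumptions, not the study's fMRI pipeline.
import numpy as np

def intersubject_correlation(timecourses):
    """timecourses: array of shape (n_subjects, n_timepoints) for one region."""
    n_subjects = timecourses.shape[0]
    iscs = []
    for s in range(n_subjects):
        others = np.delete(timecourses, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(timecourses[s], others)[0, 1])
    return np.array(iscs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shared = rng.standard_normal(200)        # stimulus-driven signal
    noise = rng.standard_normal((12, 200))   # subject-specific noise
    condition_a = shared + 0.8 * noise       # simulated: stronger shared response
    condition_b = 0.3 * shared + noise       # simulated: weaker shared response
    print("condition A mean ISC:", intersubject_correlation(condition_a).mean().round(2))
    print("condition B mean ISC:", intersubject_correlation(condition_b).mean().round(2))
```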