Sequential pattern mining for discovering gene interactions and their contextual information from biomedical texts
Background: Discovering gene interactions and their characterizations from biological text collections is a crucial issue in bioinformatics. Text collections are large, and it is very difficult for biologists to fully benefit from this amount of knowledge. Natural Language Processing (NLP) methods have been applied to extract background knowledge from biomedical texts. Some existing NLP approaches are based on handcrafted rules and are thus time consuming and often devoted to a specific corpus. Machine-learning-based NLP methods give good results but generate outcomes that are not really understandable by a user. Results: We take advantage of a hybridization of data mining and natural language processing to propose an original symbolic method that automatically produces patterns conveying gene interactions and their characterizations. Our method therefore detects not only gene interactions but also semantic information on the extracted interactions (e.g., modalities, biological contexts, interaction types). Only a limited resource is required: the text collection that is used as a training corpus. Our approach gives results comparable to those of state-of-the-art methods and is even better for gene interaction detection on AIMed. Conclusions: Experiments show how our approach discovers interactions and their characterizations. To the best of our knowledge, few methods automatically extract both the interactions and the associated semantic information. The gene interactions extracted from PubMed are available through a simple web interface at https://bingotexte.greyc.fr/. The software is available at https://bingo2.greyc.fr/?q=node/22.
Learning structure and schemas from heterogeneous domains in networked systems: a survey
The rapidly growing amount of available digital documents in various formats, and the possibility of accessing them through internet-based technologies in distributed environments, have led to the necessity of developing solid methods to properly organize and structure documents in large digital libraries and repositories. In particular, the extremely large size of document collections makes it impossible to organize such documents manually. Additionally, most of the documents exist in an unstructured form and do not follow any schema. Research efforts are therefore being dedicated to automatically inferring structure and schemas, which is essential in order to better organize huge collections and to effectively and efficiently retrieve documents in heterogeneous domains in networked systems. This paper presents a survey of state-of-the-art methods for inferring structure from documents and schemas in networked environments. The survey is organized around the most important application domains, namely bioinformatics, sensor networks, social networks, P2P systems, automation and control, transportation, and privacy preservation, for which we analyze recent developments in dealing with unstructured data in such domains. Peer-reviewed postprint (published version).
Towards Constructing a Corpus for Studying the Effects of Treatments and Substances Reported in PubMed Abstracts
We present the construction of an annotated corpus of PubMed abstracts
reporting about positive, negative or neutral effects of treatments or
substances. Our ultimate goal is to annotate one sentence (rationale) for each
abstract and to use this resource as a training set for text classification of
effects discussed in PubMed abstracts. Currently, the corpus consists of 750
abstracts. We describe the automatic processing that supports the corpus
construction, the manual annotation activities and some features of the medical
language in the abstracts selected for the annotated corpus. It turns out that
recognizing the terminology and the abbreviations is key to determining the
rationale sentence. The corpus will be applied to improve our classifier, which
currently has an accuracy of 78.80%, achieved with normalization of the abstract
terms based on UMLS concepts from specific semantic groups and an SVM with a
linear kernel. Finally, we discuss some other possible applications of this
corpus.
Comment: medical relation extraction, rationale extraction, effects and treatments, bioNLP
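The normalization step mentioned above replaces specific medical terms with their semantic group so the classifier generalizes across drug and disease names. A minimal sketch of that idea, assuming a hypothetical mini-lexicon in place of real UMLS lookups:

```python
# Hypothetical mini-lexicon standing in for UMLS semantic-group lookups;
# a real system would query the UMLS Metathesaurus instead.
CONCEPTS = {
    "aspirin": "CHEMICAL",
    "ibuprofen": "CHEMICAL",
    "headache": "DISORDER",
    "migraine": "DISORDER",
}

def normalize(text):
    """Replace known terms with their semantic-group label, leaving
    other tokens lowercased, before feeding text to a classifier."""
    return " ".join(CONCEPTS.get(tok.lower(), tok.lower())
                    for tok in text.split())

print(normalize("Aspirin relieved the migraine"))
```

After this step, "Aspirin relieved the migraine" and "Ibuprofen relieved the headache" map to the same token sequence, which is exactly what lets a linear-kernel SVM learn the effect pattern rather than memorize term pairs.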
Text Analytics: the convergence of Big Data and Artificial Intelligence
The analysis of the text content in emails, blogs, tweets, forums, and other forms of textual communication constitutes what we call text analytics. Text analytics is applicable to most industries: it can help analyze millions of emails; it can analyze customers' comments and questions in forums; and it can support sentiment analysis by measuring positive or negative perceptions of a company, brand, or product. Text analytics has also been called text mining, and it is a subcategory of the Natural Language Processing (NLP) field, one of the founding branches of Artificial Intelligence dating back to the 1950s, when an interest in understanding text originally developed. Currently, text analytics is often considered the next step in Big Data analysis. Text analytics has a number of subdivisions: Information Extraction, Named Entity Recognition, Semantic Web annotated-domain representation, and many more. Several techniques are currently in use and some, such as Machine Learning applied as a semi-supervised enhancement of systems, have gained a lot of attention, but they also present a number of limitations that make them not always the only or the best choice. We conclude with current and near-future applications of text analytics.
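The sentiment-analysis use case mentioned above can be illustrated at its simplest with a lexicon-based scorer. This is a toy sketch, not any particular system described in the abstract; the word lists are invented, and production systems use far larger lexicons or trained models:

```python
# Tiny hand-made sentiment lexicon (hypothetical; real systems use
# large curated lexicons or machine-learned classifiers).
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text):
    """Label text by counting positive vs negative lexicon hits."""
    tokens = text.lower().split()
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product and the support is excellent"))
```

Even this crude approach exposes the limitations the abstract alludes to: negation ("not good"), sarcasm, and domain-specific vocabulary all defeat simple word counting, which is one reason machine-learning methods attracted attention.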
A Dependency Parsing Approach to Biomedical Text Mining
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language.
This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken is one using full parsing—syntactic analysis of the entire structure of sentences—and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains.
The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general-English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance, and we introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time.
To establish the relative merits of parsers that differ in the applied formalisms and in the representation given to their syntactic analyses, we also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The resulting unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization.
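Dependency-based evaluation of the kind described above typically reduces to comparing, token by token, the head assigned by the parser against the gold-standard head. A minimal sketch of that metric (unlabeled attachment score), with an invented three-token example:

```python
def unlabeled_attachment_score(gold_heads, pred_heads):
    """Fraction of tokens whose predicted head matches the gold head.

    Both inputs are lists where position i holds the 1-based head index
    of token i (0 marking the root), a common dependency encoding.
    """
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

# Invented example: "ProteinA binds ProteinB"
gold = [2, 0, 2]   # "binds" is the root; both proteins attach to it
pred = [2, 0, 1]   # parser wrongly attaches ProteinB to ProteinA
print(unlabeled_attachment_score(gold, pred))  # 2 of 3 heads correct
```

Converting different formalisms into one such head-index representation is precisely what makes parsers with otherwise incompatible output schemes comparable under a single score.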
To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships. The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6000 entities, 2500 relationships and 28,000 syntactic dependencies in 1100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships.
Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.