535 research outputs found
Getting More out of Biomedical Documents with GATE's Full Lifecycle Open Source Text Analytics.
This software article describes the GATE family of open source text analysis tools and processes. GATE is one of the most
widely used systems of its type, with yearly download rates in the tens of thousands and many active users in both academic
and industrial contexts. In this paper we report three examples of GATE-based systems operating in the life sciences and in
medicine. First, genome-wide association studies, which have contributed to the discovery of a head and neck cancer
mutation association. Second, medical records analysis, which has significantly increased the statistical power of
treatment/outcome models in the UK’s largest psychiatric patient cohort. Third, richer constructs in drug-related searching. We also
explore the ways in which the GATE family supports the various stages of the lifecycle present in our examples. We conclude
that the deployment of text mining for document abstraction or rich search and navigation is best thought of as a process,
and that with the right computational tools and data collection strategies this process can be made defined and repeatable.
The GATE research programme is now 20 years old and has grown from its roots as a specialist development tool for text
processing to become a rather comprehensive ecosystem, bringing together software developers, language engineers and
research staff from diverse fields. GATE now has a strong claim to cover a uniquely wide range of the lifecycle of text analysis
systems. It forms a focal point for the integration and reuse of advances that have been made by many people (the majority
outside of the authors’ own group) who work in text processing for biomedicine and other areas. GATE is available online
under GNU open source licences and runs on all major operating systems. Support is available from an active user and
developer community and also on a commercial basis.
MT techniques in a retrieval system of semantically enriched patents
This paper focuses on how automatic translation techniques integrated in a patent retrieval system increase its capabilities and make possible extended features and functionalities. We describe 1) a novel methodology for natural language to SPARQL translation based on grammar–ontology interoperability automation and a query grammar for the patents domain; 2) a strategy for statistical translation of patents that allows semantic annotations to be transferred to the target language; 3) a built-in knowledge representation infrastructure that uses multilingual semantic annotations; and 4) an online application that offers a multilingual search interface over structured knowledge databases (domain ontologies) and multilingual documents (biomedical patents) that have been automatically translated.
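The natural-language-to-SPARQL translation described above can be illustrated with a minimal sketch. This is not the paper's grammar–ontology mechanism; it uses a single invented grammar rule and placeholder ontology IRIs (`:Patent`, `:hasSubjectTerm`) purely to show the shape of such a mapping:

```python
import re

# One toy grammar rule: "patents about <term>" -> a SPARQL pattern.
# The property and class IRIs are invented placeholders, not the paper's ontology.
RULE = re.compile(r"patents about (?P<term>\w+)", re.I)

def nl_to_sparql(question):
    """Translate a question covered by the toy grammar into a SPARQL query string."""
    m = RULE.match(question.strip())
    if not m:
        raise ValueError("question not covered by the grammar")
    term = m.group("term")
    return (
        "SELECT ?patent WHERE {\n"
        "  ?patent a :Patent ;\n"
        f'          :hasSubjectTerm "{term}" .\n'
        "}"
    )

print(nl_to_sparql("patents about insulin"))
```

A real system would parse against a full query grammar and resolve terms to ontology concepts rather than interpolating strings.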
Image Annotation and Topic Extraction Using Super-Word Latent Dirichlet Allocation
This research presents a multi-domain solution that uses text and images to iteratively improve automated information extraction. Stage I uses local text surrounding an embedded image to provide clues that help rank-order possible image annotations. These annotations are forwarded to Stage II, where the image annotations from Stage I are used as highly relevant super-words to improve extraction of topics. The model probabilities from the super-words in Stage II are forwarded to Stage III, where they are used to refine the automated image annotation developed in Stage I. All stages demonstrate improvement over existing equivalent algorithms in the literature.
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines.
From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
In the pursuit of a semantic similarity metric based on UMLS annotations for articles in PubMed Central
Motivation
Although full-text articles are provided by the publishers in electronic formats, it remains a challenge to find related work beyond the title and abstract context. Identifying related articles based on their abstract is indeed a good starting point; this process is straightforward and does not consume as many resources as full-text based similarity would require. However, further analyses may require in-depth understanding of the full content. Two articles with highly related abstracts can be substantially different regarding the full content. How similarity differs when considering title-and-abstract versus full-text and which semantic similarity metric provides better results when dealing with full-text articles are the main issues addressed in this manuscript.
Methods
We have benchmarked three similarity metrics – BM25, PMRA, and Cosine – in order to determine which one performs best when using concept-based annotations on full-text documents. We also evaluated variations in similarity values based on title-and-abstract against those relying on full-text. Our test dataset comprises the Genomics track article collection from the 2005 Text Retrieval Conference. Initially, we used entity recognition software to semantically annotate titles and abstracts as well as full-text with concepts defined in the Unified Medical Language System (UMLS®). For each article, we created a document profile, i.e., a set of identified concepts, term frequency, and inverse document frequency; we then applied various similarity metrics to those document profiles. We considered correlation, precision, recall, and F1 in order to determine which similarity metric performs best with concept-based annotations. For those full-text articles available in PubMed Central Open Access (PMC-OA), we also performed dispersion analyses in order to understand how similarity varies when considering full-text articles.
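The document-profile comparison described above can be sketched in a few lines. This is a minimal illustration of one of the three benchmarked metrics (plain cosine over concept tf-idf weights); the UMLS concept identifiers and idf values below are invented for the example:

```python
import math

def profile(concept_tf, idf):
    """Build a document profile: concept -> tf-idf weight."""
    return {c: tf * idf.get(c, 0.0) for c, tf in concept_tf.items()}

def cosine(p, q):
    """Cosine similarity between two concept-weight profiles."""
    shared = set(p) & set(q)
    dot = sum(p[c] * q[c] for c in shared)
    norm = (math.sqrt(sum(w * w for w in p.values())) *
            math.sqrt(sum(w * w for w in q.values())))
    return dot / norm if norm else 0.0

# Hypothetical UMLS CUIs with hypothetical idf weights
idf = {"C0027651": 1.2, "C0006826": 0.8, "C0017337": 2.0}
a = profile({"C0027651": 3, "C0006826": 1}, idf)   # article A's concept counts
b = profile({"C0027651": 1, "C0017337": 2}, idf)   # article B's concept counts
print(round(cosine(a, b), 3))
```

BM25 and PMRA score the same profiles with different weighting schemes; only the scoring function changes, not the profile representation.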
Results
We have found that the PubMed Related Articles similarity metric is the most suitable for full-text articles annotated with UMLS concepts. For similarity values above 0.8, all metrics exhibited an F1 around 0.2 and a recall around 0.1; BM25 showed the highest precision, close to 1; in all cases the concept-based metrics performed better than the word-stem-based one. Our experiments show that similarity values vary when considering only title-and-abstract versus full-text similarity. Therefore, analyses based on full-text become useful when a given research question requires going beyond title and abstract, particularly regarding connectivity across articles.
Availability
Visualization available at ljgarcia.github.io/semsim.benchmark/, data available at http://dx.doi.org/10.5281/zenodo.13323. The authors acknowledge the support from the members of the Temporal Knowledge Bases Group at Universitat Jaume I. Funding: LJGC and AGC are both self-funded; RB is funded by the “Ministerio de Economía y Competitividad” with contract number TIN2011-24147.
Bridging semantic gap: learning and integrating semantics for content-based retrieval
Digital cameras have entered ordinary homes and produced an incredibly large number
of photos. As a typical example of a broad image domain, unconstrained consumer
photos vary significantly. Unlike professional or domain-specific images, the objects
in such photos are ill-posed, occluded, and cluttered, with poor lighting, focus, and
exposure. Content-based image retrieval research has yet to bridge the semantic gap
between computable low-level information and high-level user interpretation.
In this thesis, we address the issue of the semantic gap with a structured learning
framework that allows modular extraction of visual semantics. Semantic image regions
(e.g. face, building, sky) are learned statistically, detected directly from images
without segmentation, reconciled across multiple scales, and aggregated spatially to
form a compact semantic index. To circumvent ambiguity and subjectivity in a
query, a new query method that allows spatial arrangement of visual semantics is
proposed. A query is represented as a disjunctive normal form of visual query terms
and processed using fuzzy set operators.
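The query model just described can be sketched with the standard min/max fuzzy set operators: AND within each conjunction takes the minimum membership, OR across conjunctions takes the maximum. The semantic terms and detector scores below are invented for illustration:

```python
def fuzzy_and(memberships):
    return min(memberships)  # fuzzy intersection

def fuzzy_or(memberships):
    return max(memberships)  # fuzzy union

def evaluate_dnf(query, memberships):
    """Score a DNF query against one image.
    query: list of conjunctions, each a list of semantic terms.
    memberships: term -> fuzzy membership in [0, 1] for that image."""
    return fuzzy_or(fuzzy_and(memberships.get(t, 0.0) for t in conj)
                    for conj in query)

# (face AND building) OR sky, against hypothetical region-detector scores
image = {"face": 0.9, "building": 0.4, "sky": 0.7}
print(evaluate_dnf([["face", "building"], ["sky"]], image))  # -> 0.7
```

Ranking a collection then reduces to sorting images by this score, with each image's memberships coming from the learned region detectors.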
A drawback of supervised learning is the manual labeling of regions as training
samples. In this thesis, a new learning framework has been developed to discover
local semantic patterns and to generate their training samples with minimal human
intervention. The discovered patterns can be visualized and used in semantic indexing.
In addition, three new class-based indexing schemes are explored. The winner-take-all
scheme supports class-based image retrieval. The class relative scheme and
the local classification scheme compute inter-class memberships and local class patterns,
respectively, as indexes for similarity matching. A Bayesian formulation is
proposed to unify local and global indexes in image comparison and ranking,
resulting in superior image retrieval performance over that of single indexes.
Query-by-example experiments on 2400 consumer photos with 16 semantic queries
show that the proposed approaches achieve significantly better (18% to 55%) average
precision than a high-dimensional feature fusion approach. The thesis has paved
two promising research directions, namely the semantics design approach and the
semantics discovery approach. They form elegant dual frameworks that exploit
pattern classifiers in learning and integrating local and global image semantics.
Topic identification using filtering and rule generation algorithm for textual document
Information stored digitally in text documents is seldom arranged according to specific topics. The need to read whole documents is time-consuming and decreases interest
in searching for information. Most existing topic identification methods depend on the occurrence
of terms in the text. However, not all frequently occurring terms are relevant. The term
extraction phase in a topic identification method can yield extracted terms with
similar meanings, which is known as the synonymy problem. Filtering and rule generation
algorithms are introduced in this study to identify topics in textual documents. The proposed filtering algorithm (PFA) extracts the most relevant terms from text and resolves the synonymy problem amongst the extracted terms. The rule generation algorithm (TopId) is proposed to
identify a topic for each verse based on the extracted terms. The PFA processes and filters
each sentence based on nouns and predefined keywords to produce suitable terms for the
topic. Rules are then generated from the extracted terms using the rule-based classifier. An experimental design was performed on 224 English-translated Quran verses related to female issues. Topics identified by both TopId and the Rough Set technique were compared and later verified by experts. The PFA successfully extracted more relevant terms than other filtering techniques. TopId identified topics closer to those from the experts, with an accuracy of 70%. The proposed algorithms were able to extract relevant terms without losing important terms and to identify the topic of each verse.