4,818 research outputs found
Semantic web technology to support learning about the semantic web
This paper describes ASPL, an Advanced Semantic Platform for Learning, designed using the Magpie framework to support students learning about the Semantic Web research area. We describe the evolution of ASPL and illustrate how we used the results of a formal evaluation of the initial system to re-design the user functionalities. The second version of ASPL semantically interprets the results provided by a non-semantic web mining tool and uses them to support various forms of semantics-assisted exploration, based on pedagogical strategies such as performing later reasoning steps and problem-space filtering.
Predicting Pancreatic Cancer Using Support Vector Machine
This report presents an approach to predicting pancreatic cancer using the Support Vector Machine classification algorithm. The research objective of this project is to predict pancreatic cancer from genomic data alone, from clinical data alone, and from a combination of genomic and clinical data. We used real genomic data with 22,763 samples and 154 features per sample. We also created synthetic clinical data with 400 samples and 7 features per sample in order to assess the accuracy achievable with clinical data alone. To validate the hypothesis, we combined the synthetic clinical data with a subset of features from the real genomic data. We observed that accuracy, precision and recall with genomic data alone are 80.77%, 20% and 4%; with synthetic clinical data alone, 93.33%, 95% and 30%; and with the combination of real genomic and synthetic clinical data, 90.83%, 10% and 5%. Combining the two decreased accuracy, since the genomic data is only weakly correlated with the outcome. We therefore conclude that combining genomic and clinical data does not improve pancreatic cancer prediction accuracy; a dataset with more significant genomic features might help to predict pancreatic cancer more accurately.
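The accuracy, precision and recall figures reported above all derive from the same confusion-matrix counts. As a minimal illustration (not the authors' code), the three metrics can be computed for a binary cancer/no-cancer task as follows:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision and recall for binary labels (1 = cancer)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # of real positives, how many are found
    return accuracy, precision, recall
```

Note how accuracy can be high while recall is very low (as in the 80.77%/4% genomic-only result): a classifier that rarely predicts the positive class still scores well on an imbalanced dataset.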
Detecting Important Terms in Source Code for Program Comprehension
Software Engineering research has become extremely dependent on terms (words in textual data) extracted from source code. Different techniques have been proposed to extract the "most important" terms from code. These terms are typically used as input to research prototypes: the quality of the output of these prototypes depends on the quality of the term-extraction technique. At present no consensus exists about which technique predicts the best terms for code comprehension. We perform a literature review and propose a unified prediction model based on a Naive Bayes algorithm. We evaluate our model in a field study with professional programmers, as well as in a standard 10-fold synthetic study. We found that our model predicts the top quartile of the most important terms with approximately 50% precision and recall, outperforming other popular techniques. We found the predictions from our model to help programmers to the same degree as the gold set.
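The abstract does not give the model's details, but a Naive Bayes term-importance predictor of the kind described can be sketched as follows. The binary features (e.g. whether a term appears in an identifier) are illustrative assumptions, not the paper's actual feature set:

```python
import math
from collections import defaultdict

def train_nb(samples):
    """samples: list of (feature_dict, label) pairs with binary features,
    label 1 = important term, 0 = unimportant."""
    counts = {0: defaultdict(int), 1: defaultdict(int)}
    totals = {0: 0, 1: 0}
    for feats, label in samples:
        totals[label] += 1
        for f, v in feats.items():
            if v:
                counts[label][f] += 1
    return counts, totals

def importance_score(model, feats):
    """Log-odds that a term is important; rank terms by this to pick a top quartile."""
    counts, totals = model
    n = totals[0] + totals[1]
    logpost = {}
    for c in (0, 1):
        lp = math.log(totals[c] / n)
        for f, v in feats.items():
            p = (counts[c][f] + 1) / (totals[c] + 2)  # Laplace smoothing
            lp += math.log(p if v else 1 - p)
        logpost[c] = lp
    return logpost[1] - logpost[0]
```

Ranking all extracted terms by this score and taking the top quartile mirrors the evaluation setting described in the abstract.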
ALBAYZIN 2018 spoken term detection evaluation: a multi-domain international evaluation in Spanish
Search on speech (SoS) is a challenging area due to the huge amount of information stored in audio and video repositories. Spoken term detection (STD) is an SoS-related task that aims to retrieve data from a speech repository given a textual representation of a search term (which can include one or more words). This paper presents a multi-domain, internationally open evaluation for STD in Spanish. The evaluation has been designed carefully so that several analyses of the main results can be carried out. The evaluation task aims at retrieving the speech files that contain the terms, providing their start and end times and a score that reflects the confidence given to each detection. Three Spanish speech databases that encompass different domains were employed in the evaluation: the MAVIR database, which comprises a set of talks from workshops; the RTVE database, which includes broadcast news programs; and the COREMAH database, which contains two-person spontaneous conversations about different topics. We present the evaluation itself, the three databases, the evaluation metric, the systems submitted to the evaluation, the results, and detailed post-evaluation analyses based on some term properties (in-vocabulary/out-of-vocabulary terms, single-word/multi-word terms, and native/foreign terms). Fusion results of the primary systems submitted to the evaluation are also presented. Three research groups took part in the evaluation, and 11 different systems were submitted.
The obtained results suggest that the STD task is still in progress and that performance is highly sensitive to changes in the data domain.
Funding: Ministerio de Economía y Competitividad (TIN2015-64282-R, RTI2018-093336-B-C22, TEC2015-65345-P, TEC2015-68172-C2-1-); Xunta de Galicia (ED431B 2016/035, GPC ED431B 2019/003, GRC 2014/024, ED431G/01, ED431G/04); Agrupación estratéxica consolidada (GIU16/68).
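The abstract mentions an evaluation metric without naming it; STD evaluations of this kind typically score systems with Term-Weighted Value (TWV), which combines per-term miss and false-alarm rates. A simplified sketch, assuming pre-computed per-term counts and the usual one-trial-per-second approximation:

```python
def actual_twv(term_stats, speech_seconds, beta=999.9):
    """term_stats: {term: (n_correct, n_true, n_false_alarm)}.
    Returns the term-averaged TWV; 1.0 is perfect, 0.0 is the score
    of a system that outputs nothing."""
    twv_sum = 0.0
    for term, (n_corr, n_true, n_fa) in term_stats.items():
        p_miss = 1.0 - n_corr / n_true
        trials = speech_seconds - n_true        # non-target trials, approx. one per second
        p_fa = n_fa / trials
        twv_sum += 1.0 - (p_miss + beta * p_fa) # beta weights false alarms heavily
    return twv_sum / len(term_stats)
```

The large beta makes even a handful of false alarms costly, which is why confidence-score calibration matters so much in these evaluations.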
Typos-aware Bottlenecked Pre-Training for Robust Dense Retrieval
Current dense retrievers (DRs) are limited in their ability to effectively process misspelled queries, which constitute a significant portion of query traffic in commercial search engines. The main issue is that the pre-trained language model-based encoders used by DRs are typically trained and fine-tuned using clean, well-curated text data. Misspelled queries are typically not found in the data used for training these models, and thus misspelled queries observed at inference time are out-of-distribution compared to the data used for training and fine-tuning. Previous efforts to address this issue have focused on \textit{fine-tuning} strategies, but their effectiveness on misspelled queries remains lower than that of pipelines that employ separate state-of-the-art spell-checking components. To address this challenge, we propose ToRoDer (TypOs-aware bottlenecked pre-training for RObust DEnse Retrieval), a novel re-training strategy for DRs that increases their robustness to misspelled queries while preserving their effectiveness in downstream retrieval tasks. ToRoDer utilizes an encoder-decoder architecture where the encoder takes misspelled text with masked tokens as input and outputs bottlenecked information to the decoder. The decoder then takes as input the bottlenecked embeddings, along with token embeddings of the original text with the misspelled tokens masked out. The pre-training task is to recover the masked tokens for both the encoder and decoder. Our extensive experimental results and detailed ablation studies show that DRs pre-trained with ToRoDer exhibit significantly higher effectiveness on misspelled queries, sensibly closing the gap with pipelines that use a separate, complex spell-checker component, while retaining their effectiveness on correctly spelled queries.
Comment: 10 pages, accepted at SIGIR-A
Computational acquisition of knowledge in small-data environments: a case study in the field of energetics
The UK's defence industry is accelerating its implementation of artificial intelligence, including expert systems and natural language processing (NLP) tools designed to supplement human analysis. This thesis examines the limitations of NLP tools in small-data environments (common in defence), in the defence-related energetic-materials domain. A literature review identifies the domain-specific challenges of developing an expert system (specifically an ontology). The absence of domain resources such as labelled datasets and, most significantly, the difficulty of preprocessing text resources are identified as challenges. To address the latter, a novel general-purpose preprocessing pipeline, specifically tailored for the energetic-materials domain, is developed, and its effectiveness is evaluated.
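The thesis does not detail its pipeline here, but a domain-aware preprocessing stage of the kind described might look like the sketch below. The abbreviation list and the specific cleaning steps are illustrative assumptions only:

```python
import re

# Hypothetical energetic-materials abbreviations whose casing should survive cleaning.
ABBREVIATIONS = {"rdx", "tnt", "hmx", "petn"}

def preprocess(text):
    """Minimal domain-aware cleaning: re-join words hyphenated across line
    breaks, collapse whitespace, tokenise, and lowercase everything except
    known domain abbreviations."""
    text = re.sub(r"-\s*\n\s*", "", text)   # join "deto-\nnation" -> "detonation"
    text = re.sub(r"\s+", " ", text).strip()
    tokens = re.findall(r"[A-Za-z0-9']+", text)
    return [t if t.lower() in ABBREVIATIONS else t.lower() for t in tokens]
```

De-hyphenation and abbreviation preservation matter in this domain because compound names and material codes are exactly the terms an ontology-building effort needs to keep intact.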
The interface between using NLP tools in data-limited environments to supplement human analysis and using them to replace it completely is examined in a study of the subjective concept of importance. A methodology for directly comparing the ability of NLP tools and experts to identify important points in a text is presented. Results show that the participants of the study exhibit little agreement, even on which points in the text are important. The NLP tool, the expert (the author of the text being examined) and the participants agree only on general statements; however, as a group, the participants agreed with the expert. In data-limited environments, the extractive-summarisation tools examined cannot identify the important points in a technical document as effectively as an expert.
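The "little agreement" finding above presupposes some agreement measure over the points each participant selected. One simple choice (an assumption here, not necessarily the thesis's measure) is mean pairwise Jaccard overlap between the sets of sentences each annotator marked as important:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap of two selections as |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def mean_pairwise_agreement(selections):
    """selections: one set of 'important' sentence indices per annotator.
    Returns the average Jaccard overlap over all annotator pairs."""
    pairs = list(combinations(selections, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Values near 0 indicate annotators picked largely disjoint sentences, which is the pattern the study reports.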
A methodology for classifying journal articles, in a data-limited environment, by the technology readiness level (TRL) of the technologies they describe is proposed. Techniques to overcome challenges of using real-world data, such as class imbalance, are investigated. A methodology for evaluating the reliability of human annotations is presented. Analysis identifies a lack of agreement and consistency in the expert evaluation of document TRL.
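Annotation-reliability analyses of the kind mentioned commonly use chance-corrected agreement statistics. As an illustrative sketch (the thesis's actual measure is not stated here), Cohen's kappa for two annotators' TRL labels:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Agreement between two annotators' label sequences, corrected for
    the agreement expected by chance from each annotator's label frequencies."""
    assert len(a) == len(b)
    n = len(a)
    p_observed = sum(1 for x, y in zip(a, b) if x == y) / n
    ca, cb = Counter(a), Counter(b)
    p_chance = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance) if p_chance != 1 else 1.0
```

Kappa near 0 (or below) on expert TRL labels would quantify the "lack of agreement and consistency" the analysis reports.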
Text messaging and retrieval techniques for a mobile health information system
Mobile phones have been identified as one of the technologies that can be used to overcome the challenges of information dissemination regarding serious diseases. Short message services, a much-used function of cell phones, for example, can be turned into a major tool for accessing databases. This paper focuses on the design and development of a short message services-based information access algorithm to carefully screen information on human immunodeficiency virus/acquired immune deficiency syndrome within the context of a frequently asked questions system. However, automating the short message services-based information search and retrieval poses significant challenges because of the inherent noise in its communications. The developed algorithm was used to retrieve the best-ranked question-answer pair. Results were evaluated using three metrics: average precision, recall and computational time. The retrieval efficacy was measured and it was confirmed that there was a significant improvement in the results of the proposed algorithm when compared with similar retrieval algorithms.
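The core difficulty described above, matching noisy SMS text against clean FAQ entries, can be sketched with a simple normalise-then-fuzzy-match ranker. The abbreviation table and scoring scheme are illustrative assumptions, not the paper's algorithm:

```python
import difflib

# Hypothetical SMS shorthand expansions; a real system would need a larger table.
SMS_NORMALISE = {"u": "you", "r": "are", "pls": "please"}

def tokens(text):
    return [SMS_NORMALISE.get(t, t) for t in text.lower().split()]

def match_score(query, faq_question):
    """Average best fuzzy-match ratio of each query token against the
    FAQ question's tokens; tolerant of misspellings like 'transmited'."""
    q, f = tokens(query), tokens(faq_question)
    if not q:
        return 0.0
    total = 0.0
    for qt in q:
        total += max((difflib.SequenceMatcher(None, qt, ft).ratio() for ft in f),
                     default=0.0)
    return total / len(q)

def best_match(query, faq):
    """faq: list of (question, answer) pairs; return the best-ranked pair."""
    return max(faq, key=lambda qa: match_score(query, qa[0]))
```

Because scoring is per-token and fuzzy, a query with SMS shorthand and a misspelling can still outrank unrelated FAQ entries, which is the retrieval behaviour the abstract targets.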