
    Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs

    Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex questions directly from textual sources on the fly, by computing similarity joins over partial results from different documents. Our method is completely unsupervised, avoiding training-data bottlenecks and able to cope with rapidly evolving ad hoc topics and formulation styles in user questions. QUEST builds a noisy quasi KG with node and edge weights, consisting of dynamically retrieved entity names and relational phrases. It augments this graph with types and semantic alignments, and computes the best answers with an algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex questions and show that it substantially outperforms state-of-the-art baselines.
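
    For intuition, here is a minimal sketch (not the authors' implementation) of the Group Steiner Tree step: given a weighted quasi KG and one terminal group per question keyword, a simple heuristic roots a candidate tree at every node and unions the shortest paths to the nearest terminal of each group. The toy graph, weights, and example question below are hypothetical.

```python
# Minimal sketch of answer finding as a Group Steiner Tree problem over a
# toy quasi KG built with networkx. Nodes, weights, and question are invented.
import networkx as nx

# Quasi KG: nodes are entity names and relational phrases; lower edge
# weights mean higher confidence in the extracted evidence.
G = nx.Graph()
G.add_weighted_edges_from([
    ("footballer", "plays for", 0.4),
    ("plays for", "FC Barcelona", 0.3),
    ("footballer", "born in", 0.5),
    ("born in", "Brazil", 0.2),
    ("FC Barcelona", "team", 0.6),
])

# One terminal group per question keyword, e.g. for
# "footballer playing for FC Barcelona who was born in Brazil":
groups = [{"FC Barcelona"}, {"Brazil"}]

def group_steiner_tree(G, groups):
    """Cheapest-root heuristic: for each candidate root, union the shortest
    paths to the nearest terminal of every group; keep the cheapest tree.
    (Summed root-terminal distances over-count shared edges, so the cost is
    only an upper bound; QUEST itself uses a proper GST algorithm.)"""
    best_cost, best_edges = float("inf"), None
    for root in G.nodes:
        dist, paths = nx.single_source_dijkstra(G, root)
        edges, cost = set(), 0.0
        try:
            for grp in groups:
                t = min(grp, key=lambda n: dist.get(n, float("inf")))
                cost += dist[t]                     # KeyError if unreachable
                p = paths[t]
                edges |= {tuple(sorted(e)) for e in zip(p, p[1:])}
        except KeyError:
            continue
        if cost < best_cost:
            best_cost, best_edges = cost, edges
    return best_edges

# Inner nodes of the best tree (here: "footballer") are answer candidates.
print(group_steiner_tree(G, groups))
```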

    Quootstrap: Scalable Unsupervised Extraction of Quotation-Speaker Pairs from Large News Corpora via Bootstrapping

    We propose Quootstrap, a method for extracting quotations, as well as the names of the speakers who uttered them, from large news corpora. Whereas prior work has addressed this problem primarily with supervised machine learning, our approach follows a fully unsupervised bootstrapping paradigm. It leverages the redundancy present in large news corpora, more precisely, the fact that the same quotation often appears across multiple news articles in slightly different contexts. Starting from a few seed patterns, such as ["Q", said S.], our method extracts a set of quotation-speaker pairs (Q, S), which are in turn used for discovering new patterns expressing the same quotations; the process is then repeated with the larger pattern set. Our algorithm is highly scalable, which we demonstrate by running it on the large ICWSM 2011 Spinn3r corpus. Validating our results against a crowdsourced ground truth, we obtain 90% precision at 40% recall using a single seed pattern, with significantly higher recall values for more frequently reported (and thus likely more interesting) quotations. Finally, we showcase the usefulness of our algorithm's output for computational social science by analyzing the sentiment expressed in our extracted quotations. (Accepted at the 12th International Conference on Web and Social Media (ICWSM), 2018.)
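
    The bootstrapping loop described above can be sketched in a few lines of Python (a toy illustration, not the paper's implementation; the corpus, name regex, and two-round loop are hypothetical simplifications):

```python
import re

# Toy corpus: the same quotation reappears in different contexts.
corpus = [
    '"We will win", said Jane Doe.',
    'Jane Doe told reporters: "We will win".',
]

SPEAKER = r'(?P<s>[A-Z][a-z]+ [A-Z][a-z]+)'    # crude person-name matcher
seed = r'"(?P<q>[^"]+)", said ' + SPEAKER      # the ["Q", said S.] seed
patterns = {seed}

pairs = set()
for _ in range(2):                             # two bootstrapping rounds
    # Step 1: apply every known pattern to extract (Q, S) pairs.
    for doc in corpus:
        for pat in patterns:
            for m in re.finditer(pat, doc):
                pairs.add((m.group('q'), m.group('s')))
    # Step 2: contexts in which a known pair reappears become new patterns.
    for q, s in pairs:
        for doc in corpus:
            if f'"{q}"' in doc and s in doc:
                new = re.escape(doc)
                new = new.replace(re.escape(q), r'(?P<q>[^"]+)')
                new = new.replace(re.escape(s), SPEAKER)
                patterns.add(new)

print(pairs)   # -> {('We will win', 'Jane Doe')}
```

    The real system additionally generalizes the surrounding tokens into reusable patterns and keeps only patterns whose extractions are sufficiently precise; this sketch simply memorizes full sentence contexts.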

    SNPredict: A Machine Learning Approach for Detecting Low Frequency Variants in Cancer

    Cancer is a genetic disease caused by the accumulation of DNA variants such as single nucleotide changes or insertions/deletions in DNA. DNA variants can silence tumor suppressor genes or increase the activity of oncogenes. To develop successful therapies for cancer patients, these DNA variants need to be identified accurately. DNA variants can be identified by comparing the DNA sequence of tumor tissue to that of non-tumor tissue using Next Generation Sequencing (NGS) technology. But detecting variants in cancer is hard because many of these variants occur only in a small subpopulation of the tumor tissue. It becomes a challenge to distinguish these low-frequency variants from sequencing errors, which are common in today's NGS methods. Several algorithms have been developed and implemented as tools to identify such variants in cancer. However, it has previously been shown that there is low concordance among the results produced by these tools. Moreover, the number of false positives tends to increase significantly when these tools are faced with low-frequency variants. This study presents SNPredict, a single nucleotide polymorphism (SNP) detection pipeline that uses machine learning to combine the results of multiple variant callers into a consensus output with higher accuracy than any of the individual tools. By extracting features from the consensus output that describe traits associated with an individual variant call, it builds binary classifiers that predict a SNP's true state and thereby help distinguish a sequencing error from a true variant.
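
    A minimal sketch of the consensus idea, assuming per-call features have already been extracted from several callers' outputs; the feature set, toy labels, and random-forest choice are illustrative assumptions, not SNPredict's actual schema:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# One row per candidate SNP call:
# [called_by_caller1, called_by_caller2, called_by_caller3,
#  variant allele frequency, read depth, mean base quality]
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.uniform(0.0, 0.5, n),      # low-frequency variants
    rng.integers(10, 500, n),
    rng.uniform(20, 40, n),
])
# Toy ground truth: treat a call confirmed by at least two callers as real.
y = (X[:, :3].sum(axis=1) >= 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```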

    Quality assurance of rectal cancer diagnosis and treatment - phase 3: statistical methods to benchmark centres on a set of quality indicators

    In 2004, the Belgian Section for Colorectal Surgery, a section of the Royal Belgian Society for Surgery, decided to start PROCARE (PROject on CAncer of the REctum), a multidisciplinary, profession-driven and decentralized project whose main objectives were to reduce diagnostic and therapeutic variability and to improve outcomes in patients with rectal cancer. All medical specialties involved in the care of rectal cancer established a multidisciplinary steering group in 2005. They agreed to approach the stated goal by means of treatment standardization through guidelines, implementation of these guidelines, and quality assurance through registration and feedback. In 2007, the PROCARE guidelines were updated (PROCARE Phase I, KCE report 69). In 2008, a set of 40 process and outcome quality-of-care indicators (QCIs) was developed and organized into 8 domains of care: general, diagnosis/staging, neoadjuvant treatment, surgery, adjuvant treatment, palliative treatment, follow-up and histopathologic examination. These QCIs were tested on the prospective PROCARE database and on an administrative (claims) database (PROCARE Phase II, KCE report 81). Afterwards, 4 QCIs were added by the PROCARE group. Centres have been receiving feedback from the PROCARE registry on these QCIs, with a description of the distribution of the unadjusted centre-averaged observed measures and the centre's position therein. To optimize this feedback, centres should ideally be informed of their risk-adjusted outcomes and be given benchmarks. The PROCARE Phase III study is devoted to developing a methodology to achieve this feedback.
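
    One standard way to produce such risk-adjusted feedback (a sketch of the general technique, not necessarily the exact PROCARE Phase III methodology) is to fit a patient-level logistic model on case-mix variables and compare each centre's observed event count to the sum of its predicted probabilities. All variables and numbers below are hypothetical:

```python
# Generic risk adjustment for one binary quality indicator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(65, 10, n)                      # case-mix variables
stage = rng.integers(1, 5, n)
centre = rng.integers(0, 20, n)                  # 20 hypothetical centres
p_true = 1 / (1 + np.exp(6 - 0.05 * age - 0.4 * stage))
outcome = (rng.random(n) < p_true).astype(int)   # e.g. a complication

X = np.column_stack([age, stage])
expected = LogisticRegression().fit(X, outcome).predict_proba(X)[:, 1]

# Observed/expected ratio per centre: ratios well above 1 flag centres
# whose excess events are not explained by their case mix.
for c in range(20):
    m = centre == c
    print(f"centre {c:2d}: O/E = {outcome[m].sum() / expected[m].sum():.2f}")
```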

    Multi-test Decision Tree and its Application to Microarray Data Classification

    Objective: A desirable property of tools used to investigate biological data is that they produce models and predictive decisions that are easy to understand. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. Methods: We propose the multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Results: Experimental validation was performed on several real-life gene expression datasets. Comparison with eight classifiers shows that MTDT has statistically significantly higher accuracy than popular decision tree classifiers, and it is highly competitive with ensemble learning algorithms. The proposed solution outperformed its baseline algorithm on 14 datasets by an average of 6 percent. A study performed on one of the datasets showed that the discovered genes used in the MTDT classification model are supported by biological evidence in the literature. Conclusion: This paper introduces a new type of decision tree that is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts.
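
    The core idea, sketched below as a hypothetical simplification: each internal node holds several univariate threshold tests on different, similarly ranked genes and routes a sample by majority vote, so the split survives noise in any single gene. The features, thresholds, and labels are invented for illustration:

```python
import numpy as np

class MultiTestNode:
    def __init__(self, tests=(), left=None, right=None, label=None):
        self.tests = tests                 # list of (gene_index, threshold)
        self.left, self.right, self.label = left, right, label

    def route(self, x):
        if self.label is not None:         # leaf: return the class label
            return self.label
        votes = sum(x[g] <= t for g, t in self.tests)
        child = self.left if votes > len(self.tests) / 2 else self.right
        return child.route(x)

# A single-test root breaks if gene 0 is noisy; this root also consults two
# similarly ranked genes (3 and 7), stabilizing the split.
root = MultiTestNode(
    tests=[(0, 0.5), (3, 1.2), (7, 0.9)],
    left=MultiTestNode(label="healthy"),
    right=MultiTestNode(label="tumor"),
)
print(root.route(np.array([0.3, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0, 0.8])))
```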

    Which sampling design to monitor saturated hydraulic conductivity?

    Soil in a changing world is subject to both anthropogenic and environmental stresses. Soil monitoring is essential to assess the magnitude of changes in soil variables and how they affect ecosystem processes and human livelihoods. However, we cannot always be sure which sampling design is best for a given monitoring task. We employed rotational stratified simple random sampling (rotStRS) to estimate temporal changes in the spatial mean of saturated hydraulic conductivity (Ks) at three sites in central Panama in 2009, 2010 and 2011. To assess this design's efficiency, we compared the resulting estimates of the spatial mean and variance for 2009 with those from stratified simple random sampling (StRS), which was effectively the data obtained at the first sampling time, and with an equivalent unexecuted simple random sampling (SRS). The poor performance of geometrical stratification and the weak predictive relationship between measurements of successive years meant that designs more complex than SRS offered no advantage. The failure of stratification may be attributed to the small large-scale variability of Ks. Revisiting previously sampled locations was not beneficial because of the large small-scale variability in combination with destructive sampling, which resulted in poor consistency between revisited samples. We conclude that for our Ks monitoring scheme, repeated SRS is as effective as rotStRS. Some problems of small-scale variability might be overcome by collecting several samples at close range to reduce the effect of small-scale variation. Finally, we give recommendations on the key factors to consider when deciding whether to use stratification and rotation in a soil monitoring scheme.
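
    The design comparison can be sketched on synthetic data (all values illustrative; equal splits of the field stand in for geometrical strata). When large-scale variability is small, the stratified estimator's standard error is barely better than that of SRS, mirroring the study's result for Ks:

```python
import numpy as np

rng = np.random.default_rng(2)
field = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)   # synthetic Ks

def srs_mean_se(pop, n):
    # Simple random sample: plain mean and standard error.
    s = rng.choice(pop, size=n, replace=False)
    return s.mean(), s.std(ddof=1) / np.sqrt(n)

def stratified_mean_se(pop, n_per_stratum, n_strata=4):
    # Stratified estimate: weight per-stratum means by stratum size.
    strata = np.array_split(pop, n_strata)
    w = np.array([len(st) / len(pop) for st in strata])
    means, var_terms = [], []
    for st in strata:
        s = rng.choice(st, size=n_per_stratum, replace=False)
        means.append(s.mean())
        var_terms.append(s.var(ddof=1) / n_per_stratum)
    return (w * means).sum(), np.sqrt((w ** 2 * var_terms).sum())

print("SRS:        mean=%.2f  se=%.2f" % srs_mean_se(field, 40))
print("stratified: mean=%.2f  se=%.2f" % stratified_mean_se(field, 10))
```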

    Studies of Artists: An Annotated Directory

    This annotated directory documents more than 80 different studies of artist populations. The directory provides information about how the researcher in each study has defined the artist and identified the population. Studies are arranged by type of artist population and, within each category, by study date. Each entry indicates, in so far as possible from available materials, the study investigator, the artist population, the way in which artists were identified, sampling procedures, number of respondents and response rates, and publications based on the study. This directory should provide researchers and other interested parties with a range of definitions, identification methods, and sampling procedures currently used in studies of artists. The introduction to the directory provides a critical overview of the numerous methods for identifying and defining "artists."