
    Using graph-kernels to represent semantic information in text classification

    Most text classification systems use a bag-of-words representation of documents to learn the classification target function. Linguistic structures such as morphology, syntax and semantics are completely neglected in the learning process. This paper proposes a new document representation that, while including context-independent sentence meaning, can be used by a structured kernel function, namely the direct product kernel. The proposal is evaluated on a dataset of articles from a Portuguese daily newspaper, with classifiers built using the SVM algorithm. The results show that this structured representation, while only partially describing a document's meaning, has the same discriminative power over classes as the traditional bag-of-words approach.
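As an illustration of the direct product kernel the abstract refers to, here is a minimal sketch for labeled graphs. The geometric-series weighting and the label-matching rule are common choices in the graph-kernel literature, not necessarily the paper's exact formulation:

```python
import numpy as np

def direct_product_kernel(A1, labels1, A2, labels2, lam=0.1):
    """Geometric random-walk kernel on the direct product graph.

    A1, A2: adjacency matrices of the two graphs.
    labels1, labels2: node labels; only pairs of nodes with
    matching labels become nodes of the product graph.
    (Illustrative sketch; lam must be small enough that the
    geometric series converges.)
    """
    # Product nodes: all label-compatible pairs (i from G1, r from G2).
    pairs = [(i, r) for i in range(len(labels1))
                    for r in range(len(labels2))
                    if labels1[i] == labels2[r]]
    n = len(pairs)
    # Product edge (i, r) -- (j, s) exists iff i--j in G1 and r--s in G2.
    Ax = np.zeros((n, n))
    for a, (i, r) in enumerate(pairs):
        for b, (j, s) in enumerate(pairs):
            Ax[a, b] = A1[i, j] * A2[r, s]
    # Closed form of the weighted walk count: sum_k lam^k Ax^k = (I - lam Ax)^-1
    K = np.linalg.inv(np.eye(n) - lam * Ax)
    return K.sum()
```

The resulting value counts common walks in the two graphs, discounted by length, which is what lets the kernel compare structured sentence representations inside an SVM.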

    A Labeled Graph Kernel for Relationship Extraction

    In this paper, we propose an approach for Relationship Extraction (RE) based on labeled graph kernels. The kernel we propose is a particularization of a random walk kernel that exploits two properties previously studied in the RE literature: (i) the words between the candidate entities, or connecting them in a syntactic representation, are particularly likely to carry information regarding the relationship; and (ii) combining information from distinct sources in a kernel may help the RE system make better decisions. We performed experiments on a dataset of protein-protein interactions, and the results show that our approach obtains effectiveness values comparable with state-of-the-art kernel methods. Moreover, our approach is able to outperform the state-of-the-art kernels when combined with other kernel methods.
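Property (ii), combining information from distinct sources, rests on a standard fact about kernels: a nonnegative weighted sum of positive semidefinite Gram matrices is again positive semidefinite, hence a valid kernel. A minimal sketch (the source names are hypothetical, and this is the generic combination rule, not the paper's specific construction):

```python
import numpy as np

def is_psd(K, tol=1e-9):
    """Check positive semidefiniteness via the symmetrized spectrum."""
    return bool(np.all(np.linalg.eigvalsh((K + K.T) / 2) >= -tol))

def combine_kernels(kernels, weights):
    """Weighted sum of precomputed Gram matrices.

    With nonnegative weights, the result is itself a valid kernel
    and can be passed to an SVM that accepts precomputed kernels.
    """
    if any(w < 0 for w in weights):
        raise ValueError("weights must be nonnegative")
    return sum(w * K for w, K in zip(weights, kernels))
```

For example, a Gram matrix built from surface word sequences and one built from dependency paths can be combined this way, letting the SVM exploit both sources at once.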

    Designing Semantic Kernels as Implicit Superconcept Expansions

    Recently, there has been increased interest in the exploitation of background knowledge in the context of text mining tasks, especially text classification. At the same time, kernel-based learning algorithms like Support Vector Machines have become a dominant paradigm in the text mining community. Amongst other reasons, this is due to their capability to achieve more accurate learning results by replacing the standard linear kernel (bag-of-words) with customized kernel functions which incorporate additional a priori knowledge. In this paper we propose a new approach to the design of ‘semantic smoothing kernels’ by means of an implicit superconcept expansion using well-known measures of term similarity. The experimental evaluation on two different datasets indicates that our approach consistently improves performance in situations where (i) training data is scarce or (ii) the bag-of-words representation is too sparse to build stable models when using the linear kernel.
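The core idea of a semantic smoothing kernel is to replace the linear kernel x·y with x^T S y, where S encodes term similarity. A minimal sketch, assuming S is built from a term-by-superconcept matrix P so that S = P P^T is positive semidefinite and the kernel stays valid (the vocabulary and mapping below are invented for illustration, not the paper's similarity measures):

```python
import numpy as np

def semantic_kernel(x, y, P):
    """Semantic smoothing kernel k(x, y) = x^T (P P^T) y.

    x, y: bag-of-words vectors over the vocabulary.
    P: term-by-superconcept matrix; P[i, c] > 0 when term i belongs
    to superconcept c. Since S = P P^T is PSD, k is a valid kernel.
    """
    return float(x @ P @ P.T @ y)

# Toy vocabulary: ["car", "automobile", "dog"], two superconcepts
# ("vehicle", "animal"). "car" and "automobile" share a superconcept.
P = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
```

Two documents with no terms in common ("car" vs. "automobile") get a zero score under the plain linear kernel but a positive score under the smoothed kernel, which is exactly the sparsity problem in situation (ii) that the expansion addresses.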