9,332 research outputs found

    An Interpretable Knowledge Transfer Model for Knowledge Base Completion

    Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, \emph{ITransF}, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfers statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets---WN18 and FB15k---for knowledge base completion and obtain improvements on both the mean rank and Hits@10 metrics over all baselines that do not use additional information. Comment: Accepted by ACL 2017. Minor update.
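
    The sharing mechanism is simple to state: instead of giving every relation its own projection matrix, the model keeps a shared stack of concept matrices and lets each relation select from it with a sparse attention vector, so related relations reuse the same concepts. Below is a minimal NumPy sketch of a translation-based score in that style; the dimensions, parameter values, and names (D, alpha_h, alpha_t) are illustrative assumptions, not the paper's trained model.

        import numpy as np

        rng = np.random.default_rng(0)
        d, m = 50, 10                    # embedding dimension, number of shared concept matrices

        D = rng.normal(size=(m, d, d))   # concept projection matrices shared across all relations
        h = rng.normal(size=d)           # head entity embedding
        t = rng.normal(size=d)           # tail entity embedding
        r = rng.normal(size=d)           # relation translation vector

        # Sparse attention over concepts for this relation: most entries are zero.
        alpha_h = np.zeros(m); alpha_h[[1, 4]] = [0.7, 0.3]
        alpha_t = np.zeros(m); alpha_t[[1, 4]] = [0.5, 0.5]

        # Attended projections are sparse combinations of the shared concept matrices.
        P_h = np.tensordot(alpha_h, D, axes=1)
        P_t = np.tensordot(alpha_t, D, axes=1)

        # Translation-based dissimilarity: lower means the triple (h, r, t) is more plausible.
        score = np.linalg.norm(P_h @ h + r - P_t @ t, ord=1)
        print(score)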

    Quantitative Redundancy in Partial Implications

    We survey the different properties of an intuitive notion of redundancy, as a function of the precise semantics given to the notion of partial implication. The final version of this survey will appear in the Proceedings of the Int. Conf. Formal Concept Analysis, 2015. Comment: Int. Conf. Formal Concept Analysis, 2015.
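
    Under the most common semantics, a partial implication X -> Y is an association rule whose strength is its confidence, supp(X u Y) / supp(X), and redundancy asks when the confidence of one rule is guaranteed by that of others. A small self-contained sketch of that confidence semantics on a toy transaction database (the data and the rule are illustrative):

        # Toy transaction database: each transaction is a set of items.
        transactions = [
            {"a", "b", "c"},
            {"a", "b"},
            {"a", "c"},
            {"b", "c"},
            {"a", "b", "c"},
        ]

        def support(itemset):
            """Fraction of transactions containing every item of `itemset`."""
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(antecedent, consequent):
            """Confidence of the partial implication antecedent -> consequent."""
            return support(antecedent | consequent) / support(antecedent)

        # The partial implication {a} -> {b} holds with confidence 0.75.
        print(confidence({"a"}, {"b"}))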

    On the automated extraction of regression knowledge from databases

    The advent of inexpensive, powerful computing systems, together with the increasing amount of available data, constitutes one of the greatest challenges for next-century information science. Since it is apparent that much future analysis will be done automatically, a good deal of attention has been paid recently to the implementation of ideas and/or the adaptation of systems originally developed in machine learning and other computer science areas. This interest seems to stem from both the suspicion that traditional techniques are not well-suited for large-scale automation and the success of new algorithmic concepts in difficult optimization problems. In this paper, I discuss a number of issues concerning the automated extraction of regression knowledge from databases. By regression knowledge is meant quantitative knowledge about the relationship between a vector of predictors or independent variables (x) and a scalar response or dependent variable (y). A number of difficulties found in some well-known tools are pointed out, and a flexible framework avoiding many such difficulties is described and advocated. Basic features of a new tool pursuing this direction are reviewed.
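
    To make the notion concrete: extracting regression knowledge means fitting a flexible model of y as a function of the predictor vector x directly from stored records, with little manual tuning. A minimal sketch with scikit-learn on synthetic data (the data, the choice of a tree ensemble, and the importance readout are illustrative assumptions, not the tool the paper describes):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Synthetic "database": three predictors and a noisy nonlinear response.
        X = rng.uniform(-2, 2, size=(500, 3))
        y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

        # Fit a flexible regression model and ask which predictors carry signal.
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        print(model.feature_importances_)   # the third predictor should contribute almost nothing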

    Ethical Reductionism

    Ethical reductionism is the best version of naturalistic moral realism. Reductionists regard moral properties as identical to properties appearing in successful scientific theories. Nonreductionists, including many of the Cornell Realists, argue that moral properties instead supervene on scientific properties without identity. I respond to two arguments for nonreductionism. First, nonreductionists argue that the multiple realizability of moral properties defeats reductionism. Multiple realizability can be addressed in ethics by identifying moral properties uniquely or disjunctively with properties of the special sciences. Second, nonreductionists argue that irreducible moral properties explain empirical phenomena, just as irreducible special-science properties do. But since irreducible moral properties don't successfully explain additional regularities, they run the risk of being pseudoscientific properties. Reductionism has all the benefits of nonreductionism, while also being more secure against anti-realist objections because of its ontological simplicity.

    Substructure Discovery Using Minimum Description Length and Background Knowledge

    The ability to identify interesting and repetitive substructures is an essential component to discovering knowledge in structural data. We describe a new version of our SUBDUE substructure discovery system based on the minimum description length principle. The SUBDUE system discovers substructures that compress the original data and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. SUBDUE uses a computationally-bounded inexact graph match that identifies similar, but not identical, instances of a substructure and finds an approximate measure of closeness between two substructures under computational constraints. In addition to the minimum description length principle, other background knowledge can be used by SUBDUE to guide the search towards more appropriate substructures. Experiments in a variety of domains demonstrate SUBDUE's ability to find substructures capable of compressing the original data and to discover structural concepts important to the domain. Description of Online Appendix: This is a compressed tar file containing the SUBDUE discovery system, written in C. The program accepts as input databases represented in graph form, and will output discovered substructures with their corresponding value. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
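
    The MDL principle behind this search reduces to a simple comparison: a substructure S is valuable when describing S once, plus the graph with instances of S collapsed, is cheaper than describing the graph directly. A schematic sketch of that evaluation follows; the bit-cost function is a placeholder, since SUBDUE's actual encoding of vertices, edges, and labels is finer-grained.

        def description_length(num_vertices, num_edges):
            """Placeholder bit cost for a labeled graph; SUBDUE's real encoding is finer-grained."""
            return 4.0 * num_vertices + 6.0 * num_edges

        def compression_value(graph_v, graph_e, sub_v, sub_e, instances):
            """MDL value of substructure S: DL(G) / (DL(S) + DL(G | S)).

            Collapsing each of the `instances` occurrences of S to a single
            vertex removes (sub_v - 1) vertices and sub_e edges per instance.
            """
            dl_graph = description_length(graph_v, graph_e)
            dl_sub = description_length(sub_v, sub_e)
            compressed_v = graph_v - instances * (sub_v - 1)
            compressed_e = graph_e - instances * sub_e
            dl_rest = description_length(compressed_v, compressed_e)
            return dl_graph / (dl_sub + dl_rest)

        # A substructure with 3 vertices and 2 edges occurring 10 times in a
        # 100-vertex, 150-edge graph: values above 1.0 indicate compression.
        print(compression_value(100, 150, 3, 2, 10))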

    Learning to harvest information for the semantic web

    This work was carried out within the AKT project (www.aktors.org), sponsored by the UK Engineering and Physical Sciences Research Council (grant GR/N15764/01), and the Dot.Kom project (www.dot-kom.org), sponsored by the EU IST as part of Framework V (grant IST-2001-34038). In this paper we describe a methodology for harvesting information from large distributed repositories (e.g. large Web sites) with minimum user intervention. The methodology is based on a combination of information extraction, information integration and machine learning techniques. Learning is seeded by extracting information from structured sources (e.g. databases and digital libraries) or a user-defined lexicon. Retrieved information is then used to partially annotate documents. Annotated documents are used to bootstrap learning for simple Information Extraction (IE) methodologies, which in turn produce more annotations to annotate more documents, which are used to train more complex IE engines, and so on. In this paper we describe the methodology and its implementation in the Armadillo system, compare it with the current state of the art, and describe the details of an implemented application. Finally we draw some conclusions and highlight some challenges and future work.
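
    The bootstrapping loop at the heart of the methodology can be stated compactly: seed annotations from structured sources, train a simple extractor, use its output to annotate more documents, and repeat with progressively stronger learners. A schematic sketch of that loop follows; the extractor interface, the toy trainer, and the stopping rule are illustrative assumptions, not Armadillo's actual API.

        def bootstrap_harvest(documents, seed_lexicon, train_extractor, rounds=3):
            """Iteratively grow annotations: seeds -> weak extractor -> more annotations.

            `train_extractor(annotations)` is assumed to return a callable mapping
            a document to a set of newly extracted items; both are placeholders.
            """
            annotations = set(seed_lexicon)          # seed from databases / user lexicon
            for _ in range(rounds):
                extractor = train_extractor(annotations)
                new_items = set()
                for doc in documents:
                    new_items |= extractor(doc)      # partially annotate more documents
                if new_items <= annotations:         # fixed point: nothing new learned
                    break
                annotations |= new_items             # richer annotations feed the next round
            return annotations

        # Toy use: an "extractor" that adds any word co-occurring with a known item.
        docs = ["alice met bob", "bob met carol", "carol met dave"]

        def train(known):
            def extract(doc):
                words = set(doc.split())
                return words - {"met"} if words & known else set()
            return extract

        print(bootstrap_harvest(docs, {"alice"}, train))   # grows to all four names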

    Extending FuzAtAnalyzer to approach the management of classical negation

    FuzAtAnalyzer was conceived as a Java framework that goes beyond classical tools in formal concept analysis. Specifically, it successfully incorporated the management of uncertainty by means of methods and tools from the area of fuzzy formal concept analysis. One limitation of formal concept analysis is that it only considers the presence of properties in objects (positive attributes), in both the fuzzy and the crisp case. In this paper, a first step in the incorporation of negations is presented. Our aim is the treatment of the absence of properties (negative attributes). Specifically, we extend the framework by including specific tools for mining knowledge that combine crisp positive and negative attributes.
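
    In the crisp case, handling negative attributes amounts to extending the formal context with a complemented copy: each attribute m gets a companion that an object has exactly when it lacks m. A minimal sketch of that extension and the usual derivation operators over it (the toy context and the "not-" naming are illustrative, not FuzAtAnalyzer's implementation):

        # Toy formal context: which objects have which (positive) attributes.
        objects = ["o1", "o2", "o3"]
        attributes = ["a", "b"]
        incidence = {("o1", "a"), ("o2", "a"), ("o2", "b")}

        def has(obj, attr):
            """True if `obj` has `attr`; "not-a" encodes the negative attribute of "a"."""
            if attr.startswith("not-"):
                return (obj, attr[4:]) not in incidence
            return (obj, attr) in incidence

        # Extend the context with a negative copy of every attribute.
        mixed_attributes = attributes + ["not-" + a for a in attributes]

        def extent(intent_set):
            """Objects having every attribute in `intent_set` (positive or negative)."""
            return {o for o in objects if all(has(o, a) for a in intent_set)}

        def intent(extent_set):
            """Mixed attributes shared by every object in `extent_set`."""
            return {a for a in mixed_attributes if all(has(o, a) for o in extent_set)}

        print(extent({"a", "not-b"}))   # objects with 'a' but lacking 'b': {'o1'}
        print(intent({"o1"}))           # mixed attributes of o1: {'a', 'not-b'}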