83 research outputs found

    Selectional Restriction Extraction for Frame-Based Knowledge Graph Augmentation

    The Semantic Web is an ambitious project aimed at creating a global, machine-readable web of data that intelligent agents can access and reason over. Ontologies are a key component of the Semantic Web, as they provide a formal description of the concepts and relationships in a particular domain. Exploiting the expressiveness of knowledge graphs together with a more logically sound ontological schema can be crucial for representing consistent knowledge and inferring new relations over the data. In other words, constraining the entities and predicates of knowledge graphs leads to improved semantics. The same benefits hold for restrictions over linguistic resources, which are knowledge graphs used to represent natural language. More specifically, it is possible to specify constraints on the arguments that can be associated with a given frame, based on their semantic roles (selectional restrictions). However, most linguistic resources define very general restrictions because they must be able to represent different domains. Hence, the main research question tackled by this thesis is whether the use of domain-specific selectional restrictions is useful for ontology augmentation, ontology definition and neuro-symbolic tasks on knowledge graphs. To this end, we have developed a tool to empirically extract selectional restrictions and their probabilities. The obtained constraints are represented in OWL-Star and subsequently mapped into OWL: we show that the mapping is information-preserving and invertible if certain conditions hold. The OWL ontologies are inserted into Framester, an open lexical-semantic resource for the English language, resulting in an improved and augmented language resource hub. The use of selectional restrictions is also tested for ontology documentation and neuro-symbolic tasks, showing how they can be exploited to provide meaningful results.
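The abstract does not describe the extraction tool's internals, but the idea of empirically extracting selectional restrictions with probabilities can be sketched as estimating, from observed (frame, role, filler-type) triples, the conditional probability of each filler type given a frame and a semantic role. The frame and type names below are invented for illustration only:

```python
from collections import Counter, defaultdict

def restriction_probabilities(observations):
    """Estimate P(filler_type | frame, role) from (frame, role, filler_type) tuples."""
    counts = defaultdict(Counter)
    for frame, role, filler_type in observations:
        counts[(frame, role)][filler_type] += 1
    probs = {}
    for key, counter in counts.items():
        total = sum(counter.values())
        probs[key] = {t: n / total for t, n in counter.items()}
    return probs

# Hypothetical corpus observations (FrameNet-style names used loosely)
obs = [
    ("Ingestion", "Ingestor", "Animal"),
    ("Ingestion", "Ingestor", "Animal"),
    ("Ingestion", "Ingestor", "Animal"),
    ("Ingestion", "Ingestor", "Person"),
    ("Ingestion", "Ingestibles", "Food"),
]
print(restriction_probabilities(obs)[("Ingestion", "Ingestor")])
# → {'Animal': 0.75, 'Person': 0.25}
```

Each probability table for a (frame, role) pair is one candidate selectional restriction; how such tables are then encoded in OWL-Star is the subject of the thesis itself.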

    Learning Ontology Relations by Combining Corpus-Based Techniques and Reasoning on Data from Semantic Web Sources

    The manual construction of formal domain conceptualizations (ontologies) is labor-intensive. Ontology learning, by contrast, provides (semi-)automatic ontology generation from input data such as domain text. This thesis proposes a novel approach for learning labels of non-taxonomic ontology relations. It combines corpus-based techniques with reasoning on Semantic Web data. The corpus-based methods apply vector-space similarity of verbs co-occurring with labeled and unlabeled relations to calculate relation label suggestions from a set of candidates. A meta-ontology, in combination with Semantic Web sources such as DBpedia and OpenCyc, allows reasoning to improve the suggested labels. An extensive formal evaluation demonstrates the superior accuracy of the presented hybrid approach.
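The core corpus-based step, suggesting a label for an unlabeled relation by comparing verb co-occurrence vectors against those of already-labeled relations, can be sketched as cosine similarity over sparse verb-count vectors. The profiles and verbs below are made up for illustration; they do not come from the thesis:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def suggest_label(unlabeled_verbs, labeled_profiles):
    """Pick the candidate label whose verb co-occurrence vector is most similar."""
    return max(labeled_profiles, key=lambda lbl: cosine(unlabeled_verbs, labeled_profiles[lbl]))

# Hypothetical verb co-occurrence profiles for two labeled relations
profiles = {
    "produces": {"manufacture": 5, "build": 3, "produce": 9},
    "employs": {"hire": 7, "employ": 10, "recruit": 4},
}
print(suggest_label({"produce": 2, "build": 1}, profiles))  # → produces
```

In the thesis, such suggestions are subsequently refined by reasoning over a meta-ontology and Semantic Web sources; the sketch covers only the corpus-based half.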

    From logical forms to SPARQL query with GETARUNS

    We present a system for Question Answering which computes a prospective answer from Logical Forms produced by a full-fledged NLP system for text understanding, and then maps the result onto schemata in SPARQL to be used for accessing the Semantic Web. As an intermediate step, and whenever there are complex concepts to be mapped, the system looks for a corresponding amalgam in YAGO classes. It is precisely the internal structure of the Logical Form that allows us to produce a suitable and meaningful context for concept disambiguation. Logical Forms are the final output of a complex system for text understanding, GETARUNS, which can deal with different levels of syntactic and semantic ambiguity in the generation of a final structure by accessing computational lexica equipped with subcategorization frames and appropriate selectional restrictions applied to the attachment of complements and adjuncts. The system also produces pronominal binding and instantiates the implicit arguments, if needed, in order to complete the required Predicate-Argument structure licensed by the semantic component.

    The Lexical Grid: Lexical Resources in Language Infrastructures

    Language Resources are recognized as central and strategic for the development of any Human Language Technology system and application product. They play a critical role as a horizontal technology and have been recognized on many occasions as a priority by national and supra-national funding bodies, which have supported a number of initiatives (such as EAGLES, ISLE, ELRA) to establish some coordination of LR activities, as well as a number of large LR creation projects, in both the written and the speech areas.

    Flexible Views for View-based Model-driven Development

    Modern software development faces the problem of fragmentation of information across heterogeneous artefacts in different modelling and programming languages. In this dissertation, the Vitruvius approach for view-based engineering is presented. Flexible views offer a compact definition of user-specific views on software systems and can be defined with the novel ModelJoin language. The process is supported by a change metamodel for metamodel evolution and change impact analysis.

    Linguistically Based QA by Dynamic LOD Access from Logical Form

    We present a system for Question Answering which computes a prospective answer from Logical Forms (hence LFs) produced by a full-fledged NLP system for text understanding, and then maps the result onto schemata in SPARQL to be used for accessing the Semantic Web. As an intermediate step, and whenever there are complex concepts to be mapped, the system looks for a corresponding amalgam in YAGO classes. This is what happens when the query to be constructed has [president,'United States'] as its goal: the amalgam search will produce the complex concept [PresidentOfTheUnitedStates]. In case no class can be recovered, as for instance in the query related to the complex structure [5th,president,'United States'], the system knows that the cardinal '5th' behaves like a quantifier restricting the class [PresidentOfTheUnitedStates]. In fact, LFs are organized with a restricted ontology made up of eight types: FOCus, PREDicate, ARGument, MODifier, ADJunct, QUANTifier, INTensifier, CARDinal. In addition, every argument has a Semantic Role to tell Subject from Object and Referential from non-Referential predicates. Another important step in the computation of the final LF is the translation of the interrogative pronoun into a corresponding semantic class word taken from general nouns, in our case the highest concepts of the WordNet hierarchy. The result is mapped into classes, properties, and restrictions (filters), as for instance in the question "Who was the wife of President Lincoln?", which becomes the final LF be-[focus-person, arg-[wife/theme_bound], arg-['Lincoln'/theme-[mod-[pred-['President']]]]] and is then turned into the SPARQL expression ?x dbpedia-owl:spouse :Abraham_Lincoln, where "dbpedia-owl:spouse" is produced by searching the DBpedia properties and, in case of failure, by looking into the synset associated with the concept WIFE.
In particular, the concept "Abraham_Lincoln" is derived from DBpedia by the association of a property and an entity name, "President" and "Lincoln", which contextualizes the reference of the name to the appropriate referent in the world. It is precisely the internal structure of the Logical Form that allows us to produce a suitable and meaningful context for concept disambiguation. Logical Forms are the final output of a complex system for text understanding, GETARUNS, which can deal with different levels of syntactic and semantic ambiguity in the generation of a final structure by accessing computational lexica equipped with subcategorization frames and appropriate selectional restrictions applied to the attachment of complements and adjuncts. The system also produces pronominal binding and instantiates the implicit arguments, if needed, in order to complete the required Predicate-Argument structure licensed by the semantic component.
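The wife-of-Lincoln walkthrough above can be sketched as a toy mapping from the key LF ingredients (interrogative pronoun, relation noun, entity) to the SPARQL expression quoted in the abstract. The `PROPERTY_LEXICON` and `WH_CONCEPTS` dictionaries are hypothetical stand-ins for the DBpedia property search and the WordNet top concepts; GETARUNS itself works quite differently from this simplification:

```python
# Hypothetical lookup tables (the real system searches DBpedia properties and
# falls back to WordNet synsets; neither is reproduced here).
PROPERTY_LEXICON = {"wife": "dbpedia-owl:spouse"}
WH_CONCEPTS = {"who": "person", "where": "location", "when": "time"}

def lf_to_sparql(wh_word, relation_noun, entity):
    """Toy LF-to-SPARQL mapping for single-property questions."""
    focus = WH_CONCEPTS[wh_word]            # interrogative pronoun -> semantic class
    prop = PROPERTY_LEXICON[relation_noun]  # property lookup (synset fallback omitted)
    query = f"SELECT ?x WHERE {{ ?x {prop} :{entity} . }}"
    return focus, query

focus, query = lf_to_sparql("who", "wife", "Abraham_Lincoln")
print(focus)   # → person
print(query)   # → SELECT ?x WHERE { ?x dbpedia-owl:spouse :Abraham_Lincoln . }
```

The returned focus class ("person") corresponds to the focus-person element of the final LF, and the triple pattern corresponds to the ?x dbpedia-owl:spouse :Abraham_Lincoln expression in the abstract.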

    Recognizing Textual Entailment Using Description Logic And Semantic Relatedness

    Textual entailment (TE) is a relation that holds between two pieces of text where someone reading the first piece can conclude that the second is most likely true. Accurate approaches to textual entailment can benefit various natural language processing (NLP) applications such as question answering, information extraction, summarization, and even machine translation. For this reason, research on textual entailment has attracted a significant amount of attention in recent years. A robust logic-based meaning representation of text is very hard to build, so the majority of textual entailment approaches rely on syntactic methods or shallow semantic alternatives. In addition, approaches that do use a logic-based meaning representation require a large knowledge base of axioms and inference rules that is rarely available. The goal of this thesis is to design an efficient description-logic-based approach for recognizing textual entailment that uses semantic relatedness information as an alternative to a large knowledge base of axioms and inference rules. We propose a description logic and semantic relatedness approach to textual entailment in which the types of semantic relatedness axioms employed in aligning the description logic representations are used as indicators of textual entailment. In our approach, the text and the hypothesis are first represented in description logic. The representations are enriched with additional semantic knowledge acquired by using the web as a corpus. The hypothesis is then merged into the text representation by learning semantic relatedness axioms on demand, and a reasoner is used to reason over the aligned representation. Finally, the types of axioms employed by the reasoner are used to learn whether the text entails the hypothesis.
To validate our approach we implemented an RTE system named AORTE and evaluated its performance on the fourth Recognizing Textual Entailment (RTE-4) challenge. Our approach achieved an accuracy of 68.8% on the two-way task and 61.6% on the three-way task, which ranked it 2nd among the participating runs in the same challenge. These results show that our description-logic-based approach can effectively be used to recognize textual entailment.
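The key intuition, that the *type* of relatedness axiom used to align hypothesis concepts with text concepts indicates entailment, can be illustrated with a toy classifier over sets of concepts. The axiom table and concept names are invented for illustration; AORTE uses full description logic representations and a reasoner, not flat sets:

```python
# Hypothetical relatedness axioms: (text_concept, hypothesis_concept) -> axiom type.
AXIOMS = {
    ("purchase", "buy"): "synonym",
    ("car", "vehicle"): "hypernym",
    ("sell", "buy"): "antonym",
}

# Axiom types that preserve entailment when used for alignment.
ENTAILING = {"synonym", "hypernym", "equivalent"}

def entails(text_concepts, hyp_concepts):
    """Entail iff every hypothesis concept aligns via an entailment-preserving axiom."""
    for h in hyp_concepts:
        if h in text_concepts:
            continue  # identical concepts align trivially
        axiom = next((t for (a, b), t in AXIOMS.items()
                      if a in text_concepts and b == h), None)
        if axiom not in ENTAILING:
            return False
    return True

print(entails({"purchase", "car"}, {"buy", "vehicle"}))  # → True
print(entails({"sell", "car"}, {"buy"}))                 # → False (antonym alignment)
```

In the real system the alignment is performed by merging the hypothesis into the text's description logic representation and running a reasoner; the toy version only captures why axiom types, rather than mere alignability, carry the entailment signal.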

    Building web service ontologies

    Harmelen, F.A.H. van [Promotor]; Stuckenschmidt, H. [Copromotor]

    Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR) 2007

    This is the proceedings of the Workshop on Semantic Content Acquisition and Representation, held in conjunction with NODALIDA 2007, on May 24 2007 in Tartu, Estonia.